I want to create a Hive table using the Flink SQL client.
I can create the table t2 successfully, but when I query t2 it reports the error:
Table options do not contain an option key 'connector' for discovering a connector. I have already set the execution type to batch in conf/sql-client-defaults.yaml.
What is wrong here? Thanks!
Flink SQL> use testdb1;
Flink SQL> create table t2(id int,name string);
[INFO] Table has been created.
Flink SQL> select * from t2;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Table options do not contain an option key 'connector' for discovering a connector.

Posted on 2020-12-21 22:40:38
The problem is that Flink doesn't know where to find or store t2 -- it needs to be associated with some source or sink, such as a file, a Kafka topic, or a JDBC database. You also need to specify a format so the data can be serialized/deserialized. For example:
CREATE TABLE KafkaTable (
  `id` BIGINT,
  `name` STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'data',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'csv'
)

For more details, see the docs for the specific connector you are using.
In the specific case of Hive, see Hive Read & Write. Here is an example from that page of setting up a table for writing to Hive:
SET table.sql-dialect=hive;
CREATE TABLE hive_table (
  id BIGINT,
  name STRING
) PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='1 h',
  'sink.partition-commit.policy.kind'='metastore,success-file'
);

https://stackoverflow.com/questions/65387785
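Note that the Hive-dialect example above only applies to tables that live in a Hive catalog; a table created in the SQL client's default in-memory catalog still needs a 'connector' option. A sketch of registering and switching to a Hive catalog from the SQL client, where the catalog name myhive and the config directory /opt/hive-conf are hypothetical example values:

```sql
-- Sketch: register a Hive catalog so that tables created under it
-- are stored in the Hive Metastore.
-- 'myhive' and '/opt/hive-conf' are hypothetical example values.
CREATE CATALOG myhive WITH (
  'type' = 'hive',
  'hive-conf-dir' = '/opt/hive-conf'
);
USE CATALOG myhive;
```

Catalogs can also be declared in conf/sql-client-defaults.yaml so they are available every time the SQL client starts.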