
Simple Table API SQL query not working on Flink 1.10 and the Blink planner

Stack Overflow user
Asked 2020-06-01 23:41:33 · 2 answers · 984 views

I want to define a Kafka connector with the Table API and run SQL queries against the resulting table (backed by Kafka). Unfortunately, the Rowtime definition does not seem to work as expected.

Here is a reproducible example:

Code language: scala
object DefineSource extends App {

  import org.apache.flink.streaming.api.TimeCharacteristic
  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.table.api.EnvironmentSettings
  import org.apache.flink.table.api.scala._
  import org.apache.flink.table.descriptors.{Csv, Kafka, Rowtime, Schema}

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

  val config = EnvironmentSettings.newInstance().inStreamingMode().useBlinkPlanner().build()
  val tEnv = StreamTableEnvironment.create(env, config)

  val rowtime = new Rowtime().watermarksPeriodicBounded(5000)
  val schema = new Schema()
    .field("k", "string")
    .field("ts", "timestamp(3)").rowtime(rowtime)

  tEnv.connect(new Kafka()
    .topic("test")
    .version("universal"))
    .withSchema(schema)
    .withFormat(new Csv())
    .createTemporaryTable("InputTable")

  val output = tEnv.sqlQuery(
    """SELECT k, COUNT(*)
      |  FROM InputTable
      | GROUP BY k, TUMBLE(ts, INTERVAL '15' MINUTE)
      |""".stripMargin
  )

  tEnv.toAppendStream[(String, Long)](output).print()

  env.execute()
}

which produces:

org.apache.flink.table.api.TableException: Window aggregate can only be defined over a time attribute column, but TIMESTAMP(3) encountered.
    at org.apache.flink.table.planner.plan.rules.logical.StreamLogicalWindowAggregateRule.getInAggregateGroupExpression(StreamLogicalWindowAggregateRule.scala:51)
    at org.apache.flink.table.planner.plan.rules.logical.LogicalWindowAggregateRuleBase.onMatch(LogicalWindowAggregateRuleBase.scala:79)
    at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
    at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
    at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
    at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:256)
    at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
    at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
    at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
    at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:160)
    at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:160)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
    at scala.collection.IterableLike.foreach(IterableLike.scala:74)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
    at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:160)
    at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:158)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:170)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:94)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:248)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:151)
    at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.scala:210)
    at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.scala:107)

I'm on Flink 1.10.0.
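For context, `TUMBLE(ts, INTERVAL '15' MINUTE)` in the query above assigns each row to a fixed, non-overlapping 15-minute event-time window, and `COUNT(*)` is then evaluated per (key, window) pair. A minimal plain-Scala sketch of that semantics (illustrative names only, not Flink API):

```scala
// Sketch of what GROUP BY k, TUMBLE(ts, INTERVAL '15' MINUTE) computes.
val windowSizeMs = 15 * 60 * 1000L  // 15 minutes

// Start of the tumbling window containing a given epoch-millisecond timestamp.
def tumbleStart(tsMs: Long): Long = tsMs - (tsMs % windowSizeMs)

// Toy events: (key, event-time timestamp in epoch millis)
val events = Seq(
  ("a", 0L), ("a", 60000L), ("b", 120000L),
  ("a", windowSizeMs + 1000L)  // falls into the next window
)

// Count per (key, window start), mirroring the SQL GROUP BY.
val counts: Map[(String, Long), Int] =
  events.groupBy { case (k, ts) => (k, tumbleStart(ts)) }
        .map { case (kw, evs) => (kw, evs.size) }
```

The error above means Flink sees `ts` as a plain `TIMESTAMP(3)` column rather than a rowtime attribute, so this windowing cannot be planned.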

2 Answers

Stack Overflow user
Answered 2020-06-03 09:36:59

This is a bug, fixed in releases after 1.10.0: https://issues.apache.org/jira/browse/FLINK-16160


Stack Overflow user
Answered 2020-06-04 14:57:50

Unfortunately, this is a bug in the 1.10 release and, as @lijiayan said, it should be fixed in 1.11+.

As a workaround in 1.10, you can use DDL instead:

Code language: java
tEnv.sqlUpdate(
    "CREATE TABLE InputTable (\n" +
    "  k STRING,\n" +
    "  ts TIMESTAMP(3),\n" +
    "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND\n" +
    ") WITH (\n" +
    "  'connector.type' = 'kafka',\n" +
    "  'connector.version' = 'universal',\n" +
    "  'connector.topic' = 'test',\n" +
    "  'connector.properties.zookeeper.connect' = 'localhost:2181',\n" +
    "  'connector.properties.bootstrap.servers' = 'localhost:9092',\n" +
    "  'format.type' = 'csv'\n" +
    ")"
);
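Both the question's `new Rowtime().watermarksPeriodicBounded(5000)` and the DDL's `WATERMARK FOR ts AS ts - INTERVAL '5' SECOND` express the same bounded-out-of-orderness strategy: the watermark trails the maximum observed event time by 5 seconds. A hedged plain-Scala sketch of that rule (illustrative, not Flink API):

```scala
// Bounded out-of-orderness: watermark = max observed event time - bound.
val maxOutOfOrdernessMs = 5000L

// Given the event-time timestamps seen so far, the current watermark.
def watermark(seenTs: Seq[Long]): Long = seenTs.max - maxOutOfOrdernessMs

// Max event time seen is 9000, so the watermark is 4000: events with
// timestamps up to 4000 are assumed to have arrived.
val wm = watermark(Seq(1000L, 9000L, 4000L))
```

A tumbling window only fires once the watermark passes the window's end, which is why the column must be a proper rowtime attribute rather than a plain `TIMESTAMP(3)`.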
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/62135758