
Spark 3.0 UTC to AKST conversion fails with ZoneRulesException: Unknown time-zone ID

Stack Overflow user
Asked on 2021-01-25 08:09:23
2 answers · 680 views · 0 followers · 2 votes

I am unable to convert a UTC timestamp to the AKST time zone in Spark 3.0. The same conversion works in Spark 2.4, and every other zone conversion works (EST, PST, MST, etc.).

Any input on how to fix this error would be appreciated.

The following command:

spark.sql("select from_utc_timestamp('2020-10-01 11:12:30', 'AKST')").show

returns the error:

java.time.zone.ZoneRulesException: Unknown time-zone ID: AKST

Full stack trace:

java.time.zone.ZoneRulesException: Unknown time-zone ID: AKST
  at java.time.zone.ZoneRulesProvider.getProvider(ZoneRulesProvider.java:272)
  at java.time.zone.ZoneRulesProvider.getRules(ZoneRulesProvider.java:227)
  at java.time.ZoneRegion.ofId(ZoneRegion.java:120)
  at java.time.ZoneId.of(ZoneId.java:411)
  at java.time.ZoneId.of(ZoneId.java:359)
  at java.time.ZoneId.of(ZoneId.java:315)
  at org.apache.spark.sql.catalyst.util.DateTimeUtils$.getZoneId(DateTimeUtils.scala:62)
  at org.apache.spark.sql.catalyst.util.DateTimeUtils$.fromUTCTime(DateTimeUtils.scala:833)
  at org.apache.spark.sql.catalyst.expressions.FromUTCTimestamp.nullSafeEval(datetimeExpressions.scala:1299)
  at org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:552)
  at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:457)
  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1$$anonfun$applyOrElse$1.applyOrElse(expressions.scala:52)
  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1$$anonfun$applyOrElse$1.applyOrElse(expressions.scala:45)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:321)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:321)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.trees.TreeNode.applyFunctionIfChanged$1(TreeNode.scala:380)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:416)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:414)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:362)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:96)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:118)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:118)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:129)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:134)
  at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at scala.collection.TraversableLike.map(TraversableLike.scala:238)
  at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
  at scala.collection.immutable.List.map(List.scala:298)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:134)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:139)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:139)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:96)
  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1.applyOrElse(expressions.scala:45)
  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1.applyOrElse(expressions.scala:44)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:321)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:321)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.trees.TreeNode.applyFunctionIfChanged$1(TreeNode.scala:380)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:416)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:414)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:362)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.trees.TreeNode.applyFunctionIfChanged$1(TreeNode.scala:380)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:416)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:414)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:362)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:310)
  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$.apply(expressions.scala:44)
  at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$.apply(expressions.scala:43)
  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
  at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
  at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
  at scala.collection.immutable.List.foldLeft(List.scala:89)
  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
  at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:116)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:98)
  at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:82)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:121)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:153)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
  at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:153)
  at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:82)
  at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:79)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$writePlans$4(QueryExecution.scala:217)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:381)
  at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$writePlans(QueryExecution.scala:217)
  at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:227)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:96)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:207)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:88)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3653)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2737)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2944)
  at org.apache.spark.sql.Dataset.getRows(Dataset.scala:301)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:338)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:864)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:823)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:832)
  ... 47 elided

2 Answers

Stack Overflow user

Accepted answer

Posted on 2021-01-25 08:42:44

It seems Spark 3 cannot understand AKST, but it does understand America/Anchorage, which should be the region corresponding to the AKST time zone:

spark.sql("select from_utc_timestamp('2020-10-01 11:12:30', 'America/Anchorage')").show
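The same fix works through the DataFrame API. A minimal sketch, assuming the same spark-shell session; on 2020-10-01 Anchorage observes daylight saving time (UTC-8), so the expected result is 2020-10-01 03:12:30:

import org.apache.spark.sql.functions.{from_utc_timestamp, lit}

// from_utc_timestamp shifts a UTC timestamp into the given zone's local
// time; region IDs such as America/Anchorage resolve cleanly in Spark 3.
spark.range(1)
  .select(from_utc_timestamp(lit("2020-10-01 11:12:30"), "America/Anchorage").as("anchorage_local"))
  .show(false)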
1 vote

Stack Overflow user

Posted on 2021-01-25 15:22:01

To add to mck's answer: you are using the short time-zone IDs from the old Java date-time API. According to the Databricks blog post "A Comprehensive Look at Dates and Timestamps in Apache Spark 3.0", Spark migrated to a new API as of version 3.0:

Since Java 8, the JDK has exposed a new API for date-time manipulation and time-zone offset resolution, and Spark migrated to this new API in version 3.0. Although the mapping of time-zone names to offsets has the same source, the IANA TZDB, it is implemented differently in Java 8 and onward compared to Java 7.
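The practical difference shows up in how unknown IDs fail. A small sketch, runnable in spark-shell or any Scala REPL, assuming a JDK whose tz data has no entry for AKST: the legacy java.util.TimeZone that Spark 2.4 relied on never throws and silently degrades an unrecognized ID to GMT, while the java.time.ZoneId used by Spark 3.0 fails fast:

import java.util.TimeZone
import java.time.ZoneId

// Legacy API: an ID it cannot understand comes back as GMT, so the call
// "succeeds" even when the zone is wrong.
println(TimeZone.getTimeZone("AKST").getID)

// New API: region IDs resolve, but unknown abbreviations throw.
println(ZoneId.of("America/Anchorage"))
// ZoneId.of("AKST")  // java.time.zone.ZoneRulesException: Unknown time-zone ID: AKST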

You can verify this by opening a spark-shell and listing the available short IDs as follows:

import java.time.ZoneId
import scala.collection.JavaConverters._

ZoneId.SHORT_IDS.asScala.keys

//res0: Iterable[String] = Set(CTT, ART, CNT, PRT, PNT, PLT, AST, BST, CST, EST, HST, JST, IST, AGT, NST, MST, AET, BET, PST, ACT, SST, VST, CAT, ECT, EAT, IET, MIT, NET)
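Note that AKST is missing from that set. Continuing the same session, a quick lookup shows why the abbreviations from the question still work in Spark 3, which consults this alias map when resolving zone IDs, while AKST does not:

// AKST has no entry in the legacy alias map...
ZoneId.SHORT_IDS.asScala.get("AKST")   // None

// ...while the abbreviations that still work map to a region or a fixed offset.
ZoneId.SHORT_IDS.asScala.get("PST")    // Some(America/Los_Angeles)
ZoneId.SHORT_IDS.asScala.get("EST")    // Some(-05:00)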

In other words, when specifying a time zone you should not use an abbreviation; use the area/city format instead. See "Which three-letter time zone IDs are not deprecated?" on Stack Overflow.
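If upstream data still carries legacy abbreviations, one option is to translate them to region IDs before calling from_utc_timestamp. A minimal workaround sketch; note that the AKST -> America/Anchorage entry is our own addition, not part of ZoneId.SHORT_IDS:

import java.time.ZoneId
import scala.collection.JavaConverters._

// Extend the JDK's alias map with the abbreviations it lacks.
val legacyAliases: Map[String, String] =
  ZoneId.SHORT_IDS.asScala.toMap + ("AKST" -> "America/Anchorage")

def toRegionId(tz: String): String = legacyAliases.getOrElse(tz, tz)

spark.sql(s"select from_utc_timestamp('2020-10-01 11:12:30', '${toRegionId("AKST")}')").show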

2 votes
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/65881015
