I am trying to set up Apache Iceberg in a Databricks environment, and I am running into an error when executing a MERGE statement in Spark.
This code:

```sql
CREATE TABLE iceberg.db.table (id bigint, data string) USING iceberg;
INSERT INTO iceberg.db.table VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO iceberg.db.table SELECT id, data FROM (select * from iceberg.db.table) t WHERE length(data) = 1;
MERGE INTO iceberg.db.table t USING (SELECT * FROM iceberg.db.table) u ON t.id = u.id
WHEN NOT MATCHED THEN INSERT *;
```

generates this error:
```
Error in SQL statement: AnalysisException: MERGE destination only supports Delta sources.
Some(RelationV2[id#116L, data#117] iceberg.db.table
```

I think the root of the problem is that MERGE is also a keyword for the Delta engine. As far as I can tell, the issue stems from the order in which Spark tries to apply its planning rules: the MERGE triggers the Delta rules, which then throw an error because the target is not a Delta table. I can read, append to, and overwrite Iceberg tables without any problems.
Main question: how can I get Spark to recognize this as an Iceberg query rather than a Delta one? Alternatively, can the Delta-related SQL rules be removed entirely?
Environment:

Spark version: 3.0.1
Databricks Runtime version: 7.6
Iceberg configuration:

```
spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.iceberg=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.iceberg.type=hadoop
spark.sql.catalog.iceberg.warehouse=BLOB_STORAGE_CONTAINER
```

Stack trace:

```
com.databricks.backend.common.rpc.DatabricksExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException: MERGE destination only supports Delta sources.
Some(RelationV2[id#116L, data#117] iceberg.db.table
);
at com.databricks.sql.transaction.tahoe.DeltaErrors$.notADeltaSourceException(DeltaErrors.scala:343)
at com.databricks.sql.transaction.tahoe.PreprocessTableMerge.apply(PreprocessTableMerge.scala:201)
at com.databricks.sql.transaction.tahoe.PreprocessTableMergeEdge$$anonfun$apply$1.applyOrElse(PreprocessTableMergeEdge.scala:39)
at com.databricks.sql.transaction.tahoe.PreprocessTableMergeEdge$$anonfun$apply$1.applyOrElse(PreprocessTableMergeEdge.scala:36)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:112)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:112)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:216)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:110)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
at com.databricks.sql.transaction.tahoe.PreprocessTableMergeEdge.apply(PreprocessTableMergeEdge.scala:36)
at com.databricks.sql.transaction.tahoe.PreprocessTableMergeEdge.apply(PreprocessTableMergeEdge.scala:29)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:152)
```

Posted on 2021-09-24 20:06:33
I believe the problem here is that Databricks' own extensions always take precedence over any other extensions added to the Spark session. This means the Iceberg code path never runs; only the Databricks extensions are used. I would ask your Databricks representative whether there is a way to have the Iceberg extensions applied first, or whether they would consider allowing alternative MERGE implementations.
Posted on 2021-10-11 07:42:01
Only insert operations are allowed against non-Delta sources; delete and merge operations are not permitted.
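Since the question notes that reads, appends, and overwrites against Iceberg tables do work, one possible workaround is to emulate the MERGE yourself with a join and an overwrite. A minimal sketch, assuming a hypothetical staging table iceberg.db.updates holding the incoming rows, and assuming the overwrite is allowed to read the target's current snapshot:

```sql
-- Emulate "update when matched, insert when not matched" without MERGE:
-- prefer the incoming row where one exists, otherwise keep the existing row.
-- iceberg.db.updates is a hypothetical staging table with the same schema.
INSERT OVERWRITE iceberg.db.table
SELECT coalesce(u.id, t.id) AS id,
       CASE WHEN u.id IS NOT NULL THEN u.data ELSE t.data END AS data
FROM iceberg.db.table t
FULL OUTER JOIN iceberg.db.updates u
  ON t.id = u.id;
```

Note that this rewrites the entire table on every run, so it is only practical for small tables; it is a stopgap, not a substitute for a real MERGE.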
Posted on 2022-03-21 03:31:53
Not exactly what you are asking for, but Databricks can convert an Iceberg table to a Delta table in place (without copying the data):
https://docs.databricks.com/delta/delta-utility.html#convert-iceberg-to-delta
Requires DBR 10.4+.
```sql
-- Convert the Iceberg table in the path <path-to-table>.
CONVERT TO DELTA iceberg.`<path-to-table>`

-- Convert the Iceberg table in the path <path-to-table> without collecting statistics.
CONVERT TO DELTA iceberg.`<path-to-table>` NO STATISTICS
```

Then run the MERGE on the Delta table.
If Iceberg offered the same kind of in-place conversion going the other way, from Delta back to Iceberg (I'm not sure it does), that would solve the original problem.
https://stackoverflow.com/questions/67893372