Cannot create hive connection jdbc:hive2://localhost:10000 when spark-submitting in cluster mode

Stack Overflow user
Asked on 2021-02-16 16:45:38
2 answers · 387 views · 0 followers · Score: 1

I am running an Apache Hudi application on Apache Spark. When I submit the application in client mode it works fine, but when I submit it in cluster mode I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o196.save.
: org.apache.hudi.hive.HoodieHiveSyncException: Cannot create hive connection jdbc:hive2://localhost:10000/
    at org.apache.hudi.hive.HoodieHiveClient.createHiveConnection(HoodieHiveClient.java:422)
    at org.apache.hudi.hive.HoodieHiveClient.<init>(HoodieHiveClient.java:95)
    at org.apache.hudi.hive.HiveSyncTool.<init>(HiveSyncTool.java:66)
    at org.apache.hudi.HoodieSparkSqlWriter$.org$apache$hudi$HoodieSparkSqlWriter$$syncHive(HoodieSparkSqlWriter.scala:321)
    at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:363)
    at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:359)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
    at org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:359)
    at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:417)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:205)
    at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:197)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:194)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:114)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$executeQuery$1(SQLExecution.scala:83)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:94)
    at org.apache.spark.sql.execution.QueryExecutionMetrics$.withMetrics(QueryExecutionMetrics.scala:141)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$withMetrics(SQLExecution.scala:178)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:93)

2 Answers

Stack Overflow user

Accepted answer

Posted on 2021-02-17 13:40:37

After changing the Hudi config "hoodie.datasource.hive_sync.jdbcurl", it started working.
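
Presumably the default jdbc:hive2://localhost:10000 URL only resolves in client mode, where the driver runs on the machine that also hosts HiveServer2; in cluster mode the driver is placed on a worker node, so the URL has to point at the actual HiveServer2 host. A minimal sketch of overriding that option, assuming a placeholder hostname hive-server.example.internal and leaving the rest of the write options unchanged:

# Hypothetical sketch: override the Hive sync JDBC URL so it no longer
# defaults to localhost. The hostname below is a placeholder, not taken
# from the original answer.
hudi_options = {
    "hoodie.datasource.hive_sync.enable": "true",
    "hoodie.datasource.hive_sync.jdbcurl": "jdbc:hive2://hive-server.example.internal:10000/",
    # ... remaining table and write options unchanged ...
}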

Score: 2

Stack Overflow user

Posted on 2021-10-11 13:35:08

Here are the Hudi write options I am using; they work when the EMR cluster is set up correctly with the appropriate security groups and subnets:

hudi_write_table_options = {
    "hoodie.table.name": "hudi_data_test",
    "hoodie.datasource.write.table.type": "MERGE_ON_READ",
    "hoodie.datasource.write.storage.type": "MERGE_ON_READ",
    # Multi-field keys are given as comma-separated strings, as Hudi expects.
    "hoodie.datasource.write.recordkey.field": "a,b",
    "hoodie.datasource.write.partitionpath.field": "a,b",
    "hoodie.datasource.write.precombine.field": "c",
    "hoodie.datasource.write.keygenerator.class": "org.apache.hudi.keygen.ComplexKeyGenerator",
    "hoodie.datasource.write.operation": "bulk_insert",
    "hoodie.consistency.check.enabled": "true",
    "hoodie.datasource.write.hive_style_partitioning": "true",
    "hoodie.datasource.hive_sync.enable": "true",
    "hoodie.datasource.hive_sync.auto_create_database": "true",
    "hoodie.datasource.hive_sync.database": "hudidatabase",
    "hoodie.datasource.hive_sync.table": "hudi_data_test",
    "hoodie.datasource.hive_sync.partition_fields": "a,b",
    # Point Hive sync at the HiveServer2 endpoint on the EMR master node (hostname elided).
    "hoodie.datasource.hive_sync.jdbcurl": "jdbc:hive2://ip-XXX-XX-XX-XX.ec2.internal:10000/",
    "hoodie.datasource.hive_sync.partition_extractor_class": "org.apache.hudi.hive.MultiPartKeysValueExtractor",
}
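
For reference, a minimal sketch of how an options dict like this is typically passed to a Hudi write; the DataFrame df and the target path below are placeholders, not part of the original answer:

# Hypothetical usage sketch: "df" and the S3 path are placeholders.
(
    df.write.format("hudi")
      .options(**hudi_write_table_options)
      .mode("append")
      .save("s3://my-bucket/hudi/hudi_data_test/")
)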
Score: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/66221153
