I wrote a Spark application for bulk-loading a Phoenix table. Everything worked fine for several weeks, but on a few days I got duplicate rows, caused by wrong table statistics. A possible workaround is to delete and regenerate the statistics for this table.
So I need to open a JDBC connection to the Phoenix database and run the statements that delete and recreate the statistics.
Since this has to happen after the new data has been written via Spark, I also want to create and use this JDBC connection inside my Spark job, right after the table bulk load has finished.
For this I added the following method and call it between dataframe.save() and sparkContext.close() in my Java code:
private static void updatePhoenixTableStatistics(String phoenixTableName) {
    try {
        // Register the Phoenix JDBC driver
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        System.out.println("Connecting to database..");
        Connection conn = DriverManager.getConnection("jdbc:phoenix:my-server.net:2181:/hbase-unsecure");

        System.out.println("Creating statement...");
        Statement st = conn.createStatement();

        // Drop the stale statistics of this table from the Phoenix system table ...
        st.executeUpdate("DELETE FROM SYSTEM.STATS WHERE physical_name='" + phoenixTableName + "'");
        System.out.println("Successfully deleted statistics data... Now refreshing it.");

        // ... and let Phoenix collect them again from scratch
        st.executeUpdate("UPDATE STATISTICS " + phoenixTableName + " ALL");
        System.out.println("Successfully refreshed statistics data.");

        st.close();
        conn.close();
        System.out.println("Connection closed.");
    } catch (Exception e) {
        System.out.println("Unable to update table statistics - Skipping this step!");
        e.printStackTrace();
    }
}

The problem: since I added this method, I always get the following exception at the end of my Spark job:
Bulk-Load: DataFrame.save() completed - Import finished successfully!
Updating Table Statistics:
Connecting to database..
Creating statement...
Successfully deleted statistics data... Now refreshing it.
Successfully refreshed statistics data.
Connection closed.
Exception in thread "Thread-31" java.lang.RuntimeException: java.io.FileNotFoundException: /tmp/spark-e5b01508-0f84-4702-9684-4f6ceac803f9/gk-journal-importer-phoenix-0.0.3h.jar (No such file or directory)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2794)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2646)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2518)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1065)
at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1119)
at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1520)
at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:68)
at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:82)
at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:97)
at org.apache.phoenix.query.ConfigurationFactory$ConfigurationFactoryImpl$1.call(ConfigurationFactory.java:49)
at org.apache.phoenix.query.ConfigurationFactory$ConfigurationFactoryImpl$1.call(ConfigurationFactory.java:46)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at org.apache.phoenix.util.PhoenixContextExecutor.callWithoutPropagation(PhoenixContextExecutor.java:93)
at org.apache.phoenix.query.ConfigurationFactory$ConfigurationFactoryImpl.getConfiguration(ConfigurationFactory.java:46)
at org.apache.phoenix.jdbc.PhoenixDriver$1.run(PhoenixDriver.java:88)
Caused by: java.io.FileNotFoundException: /tmp/spark-e5b01508-0f84-4702-9684-4f6ceac803f9/gk-journal-importer-phoenix-0.0.3h.jar (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93)
at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69)
at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99)
at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:152)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2612)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2693)
... 14 more

Does anyone know this problem and can help? How does JDBC usually work inside a Spark job? Or is there another way to do this?
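For clarity, the overall structure of my job looks roughly like this (a simplified sketch; the app name, input source, and write options are placeholders, only updatePhoenixTableStatistics is the method shown above):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class SparkImportApp {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("gk-journal-importer-phoenix").getOrCreate();
        String phoenixTableName = args[0];

        // Placeholder input - in reality the dataframe is built from the import data
        Dataset<Row> dataframe = spark.read().parquet(args[1]);

        // Bulk load via the phoenix-spark connector
        dataframe.write()
                .format("org.apache.phoenix.spark")
                .mode(SaveMode.Overwrite)
                .option("table", phoenixTableName)
                .option("zkUrl", "my-server.net:2181:/hbase-unsecure")
                .save();

        // The new JDBC step (method shown above)
        updatePhoenixTableStatistics(phoenixTableName);

        // Shutdown - the exception shows up after this point
        spark.stop();
    }
}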
I'm using HDP 2.6.5 with Spark 2.3 and Phoenix 4.7 installed. Thanks for your help!
Posted on 2019-02-20 17:58:55
I found the solution to my problem: the jar I exported had the phoenix-spark2 and phoenix-client dependencies bundled inside it.
Since these dependencies are already present in my cluster's HDP installation, I changed them to provided scope:
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-spark2</artifactId>
    <version>4.7.0.2.6.5.0-292</version>
    <scope>provided</scope> <!-- this did it, now have to add --jars to spark-submit -->
</dependency>
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.7.0.2.6.5.0-292</version>
    <scope>provided</scope> <!-- this did it, now have to add --jars to spark-submit -->
</dependency>

Now I start my Spark job with the --jars option and pass those dependencies there. This works fine in yarn-client mode:
spark-submit --class spark.dataimport.SparkImportApp --master yarn --deploy-mode client --jars /usr/hdp/current/phoenix-client/phoenix-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar hdfs:/user/test/gk-journal-importer-phoenix-0.0.3h.jar <some parameters for the main method>

PS: In yarn-cluster mode the application had been working all along (there the jar with the bundled dependencies was fine, too).
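For comparison, the cluster-mode submission differs only in the --deploy-mode flag (same class, jars, and paths as above; an untested sketch):

spark-submit --class spark.dataimport.SparkImportApp --master yarn --deploy-mode cluster --jars /usr/hdp/current/phoenix-client/phoenix-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar hdfs:/user/test/gk-journal-importer-phoenix-0.0.3h.jar <some parameters for the main method>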
https://stackoverflow.com/questions/54767929