
Not able to save into HBase using Phoenix

Stack Overflow user
Asked on 2016-01-02 23:30:14
Answers: 1 · Views: 778 · Followers: 0 · Votes: 0

I am trying out sample code to save data into HBase. I don't know where I am going wrong, but the code is not working for me.

Below is the code I tried. I am able to get an RDD for an existing table, but I am not able to save to it. I tried a couple of approaches, which I have noted in the code.

Code:

import scala.reflect.runtime.universe

import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SaveMode

case class Person(id: String, name: String)
object PheonixTest extends App {
  val conf = new SparkConf;
  conf.setMaster("local");
  conf.setAppName("test")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc);

  val hbaseConf = HBaseConfiguration.create()
  hbaseConf.set(TableInputFormat.INPUT_TABLE, "table1")
  hbaseConf.addResource(new Path("/Users/srini/softwares/hbase-1.1.2/conf/hbase-site.xml"));

  import org.apache.phoenix.spark._;
  val phDf = sqlContext.phoenixTableAsDataFrame("table1", Array("id", "name"), conf = hbaseConf)

  println("===========>>>>>>>>>>>>>>>>>> " + phDf.show());

  val rdd = sc.parallelize(Seq("sr,Srini","sr2,Srini2"))
  import sqlContext.implicits._;

  val df = rdd.map { x => {val array = x.split(","); Person(array(0), array(1))} }.toDF;

  //df.write.format("org.apache.phoenix.spark").mode("overwrite") .option("table", "table1").option("zkUrl", "localhost:2181").save()

  //df.rdd.saveToP
  df.save("org.apache.phoenix.spark", SaveMode.Overwrite, Map("table" -> "table1", "zkUrl" -> "localhost:2181"))

  sc.stop()

}

pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.srini.plug</groupId>
    <artifactId>data-ingestion</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.4</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-xml</artifactId>
            <version>2.4.4</version>
        </dependency>

        <dependency>
            <groupId>com.splunk</groupId>
            <artifactId>splunk</artifactId>
            <version>1.5.0.0</version>
        </dependency>


        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.scalaj</groupId>
            <artifactId>scalaj-collection_2.10</artifactId>
            <version>1.5</version>
        </dependency>

        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>12.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.1.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-spark</artifactId>
            <version>4.6.0-HBase-1.1</version>
        </dependency>

        <dependency>
            <groupId>com.datastax.spark</groupId>
            <artifactId>spark-cassandra-connector_2.10</artifactId>
            <version>1.4.1</version>
        </dependency>


    </dependencies>

    <repositories>
        <repository>
            <id>ext-release-local</id>
            <url>http://splunk.artifactoryonline.com/splunk/ext-releases-local</url>
        </repository>
    </repositories>

    <build>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>

                <executions>
                    <execution>
                        <id>compile</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <phase>compile</phase>
                    </execution>

                    <execution>
                        <id>test-compile</id>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                        <phase>test-compile</phase>
                    </execution>

                    <execution>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.5</source>
                    <target>1.5</target>
                </configuration>
            </plugin>

            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.5.3</version>
                <executions>
                    <execution>
                        <id>create-archive</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>

                        <configuration>
                            <descriptorRefs>
                                <descriptorRef>
                                    jar-with-dependencies
                                </descriptorRef>
                            </descriptorRefs>
                            <archive>
                                <manifest>
                                    <mainClass>com.srini.ingest.SplunkSearch</mainClass>
                                </manifest>
                            </archive>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Error:

16/01/02 18:26:29 INFO ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x152031ff8da001c, negotiated timeout = 90000
16/01/02 18:27:18 INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48344 ms ago, cancelled=false, msg=
16/01/02 18:27:38 INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=68454 ms ago, cancelled=false, msg=
16/01/02 18:27:58 INFO RpcRetryingCaller: Call exception, tries=12, retries=35, started=88633 ms ago, cancelled=false, msg=
16/01/02 18:28:19 INFO RpcRetryingCaller: Call exception, tries=13, retries=35, started=108817 ms ago, cancelled=false, msg=

1 Answer

Stack Overflow user

Posted on 2016-01-04 13:33:41

I noticed two issues:

  1. Zk URL. If you are sure ZooKeeper is running locally, update your hosts file with an entry like the following, and pass that hostname to HBaseConfiguration: ipaddress hostname
  2. By default, Phoenix upper-cases your table name and columns. So change the code above to val phDf = sqlContext.phoenixTableAsDataFrame("TABLE1", Array("ID", "NAME"), conf = hbaseConf). A sketch combining both fixes follows this list.
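
For reference, here is a minimal sketch that combines both fixes, reusing the df built in the question's code. The address 192.168.1.10 and hostname zkhost are placeholders of mine, not values from the original post:

// Sketch only: assumes the Phoenix table was created as TABLE1 with columns
// ID and NAME, and that the hosts file contains an entry such as
//   192.168.1.10  zkhost     (placeholder address and hostname)
import org.apache.spark.sql.SaveMode

df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)            // phoenix-spark writes are upserts
  .option("table", "TABLE1")           // Phoenix upper-cases unquoted identifiers
  .option("zkUrl", "zkhost:2181")      // a resolvable hostname, not localhost
  .save()

This is the DataFrameWriter form of the commented-out attempt in the question; df.save(...) is deprecated in Spark 1.5 in favor of df.write.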
Votes: -1
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/34571811