I am following the Getting Started guides on the Apache sites for Hadoop and Hive. I have configured Hadoop to run in pseudo-distributed mode. I can run HDFS operations, start beeline, create tables, insert data, and so on. The only problem is that I want my databases to be stored under /user/hive/warehouse on HDFS, but they are being created at the same path on the local file system instead.
Here are my versions and configs:
hadoop@precise64:/data/hadoop-2.8.2/logs$ hadoop version
Hadoop 2.8.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb
Compiled by jdu on 2017-10-19T20:39Z
Compiled with protoc 2.5.0
From source with checksum dce55e5afe30c210816b39b631a53b1d
This command was run using /data/hadoop-2.8.2/share/hadoop/common/hadoop-common-2.8.2.jar
hadoop@precise64:/data/hadoop-2.8.2/logs$ hive --version
Hive 2.3.2
Git git://stakiar-MBP.local/Users/stakiar/Desktop/scratch-space/apache-hive -r 857a9fd8ad725a53bd95c1b2d6612f9b1155f44d
Compiled by stakiar on Thu Nov 9 09:11:39 PST 2017
From source with checksum dc38920061a4eb32c4d15ebd5429ac8a
hadoop@precise64:/data/hadoop-2.8.2/logs$ cat $HADOOP_HOME/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
hadoop@precise64:/data/hadoop-2.8.2/logs$ cat $HADOOP_HOME/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.proxyuser.hive.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hive.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>
hadoop@precise64:/data/hadoop-2.8.2/logs$ cat $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
</configuration>
hadoop@precise64:/data/apache-hive-2.3.2-bin/conf$ cat hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.apache.derby.jdbc.EmbeddedDriver</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/hadoop/tmp</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/hadoop/tmp/${hive.session.id}_resources</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/home/hadoop/tmp</value>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/home/hadoop/tmp/operation_logs</value>
</property>
</configuration>
Posted on 2017-11-27 00:49:18
It sounds like you have not configured Hive yet.
By default, this is what you get:
Metadata is stored in an embedded Derby database whose on-disk location is determined by the Hive configuration variable named javax.jdo.option.ConnectionURL. By default this location is ./metastore_db (see conf/hive-default.xml).
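A related setting worth checking is hive.metastore.warehouse.dir, which defaults to /user/hive/warehouse and is resolved against the default file system Hive sees at startup; if Hive is launched without visibility into your Hadoop config, that resolves to the local file system, which matches the symptom you describe. As a sketch (not a confirmed fix), you could pin the warehouse to an explicit HDFS URI in hive-site.xml, reusing the hdfs://localhost:9000 authority from your core-site.xml:

```xml
<!-- Sketch: force the warehouse onto HDFS explicitly.
     The hdfs://localhost:9000 authority mirrors fs.default.name
     from the core-site.xml shown above; adjust if yours differs. -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://localhost:9000/user/hive/warehouse</value>
</property>
```

Also make sure HADOOP_HOME (or HADOOP_CONF_DIR) is set in the environment of the shell that starts HiveServer2, so Hive picks up your core-site.xml at all.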
Its concurrency is very limited:
Using Derby in embedded mode allows at most one user at a time.
It is recommended to use Postgres, MySQL, or Oracle for the metastore (a remote metastore).
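As a hedged sketch, a MySQL-backed metastore would replace the Derby connection properties in your hive-site.xml with something like the following; the hostname, database name, and credentials here are placeholders, not values from your setup:

```xml
<!-- Sketch: MySQL-backed metastore. All connection values below are
     placeholders to be replaced with your own. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
```

After switching the backend you would also need the MySQL JDBC driver on Hive's classpath and to initialize the schema once with schematool (e.g. `schematool -dbType mysql -initSchema`).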
https://stackoverflow.com/questions/47497003