In the previous section we looked at accessing MySQL from Spark; in this section we will look at accessing Hive from Spark.
First, copy Hive's configuration file into Spark's conf directory so that Spark can find the metastore:
cp /root/apache-hive-0.14.0-bin/conf/hive-site.xml /root/spark-2.2.1-bin-hadoop2.7/conf
Make sure the copied hive-site.xml contains the metastore URI property below (it sits inside the file's <configuration> element; danji is this machine's hostname, and 9083 is the metastore's default Thrift port):
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://danji:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
Next, add the following lines to spark-env.sh in Spark's conf directory. HIVE_CONF_DIR points Spark at Hive's configuration, and the last line keeps the MySQL JDBC driver on the classpath (used for the MySQL access in the previous section, and also needed if the metastore stores its metadata in MySQL):
export SPARK_DIST_CLASSPATH=$(/root/hadoop-2.5.2/bin/hadoop classpath)
export JAVA_HOME=/root/jdk1.8.0_152
export SPARK_HOME=/root/spark-2.2.1-bin-hadoop2.7
export SPARK_MASTER_IP=danji
export SPARK_EXECUTOR_MEMORY=1G
export SCALA_HOME=/root/scala-2.12.2
export HADOOP_HOME=/root/hadoop-2.5.2
export HIVE_CONF_DIR=/root/apache-hive-0.14.0-bin/conf
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/root/spark-2.2.1-bin-hadoop2.7/bin/mysql-connector-java-5.1.47.jar
Restart Spark so the new configuration takes effect:
cd /root/spark-2.2.1-bin-hadoop2.7/sbin
./stop-all.sh
./start-all.sh
Then start the Hive metastore service (note that this command runs in the foreground):
cd /root/apache-hive-0.14.0-bin/bin
./hive --service metastore
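If you prefer not to keep a terminal occupied, one common option is to background the service and log its output to a file, e.g.:
nohup ./hive --service metastore > metastore.log 2>&1 &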
With the metastore up, open spark-shell and check that Hive's tables are visible:
cd /root/spark-2.2.1-bin-hadoop2.7/bin
./spark-shell
# Run the following Scala statement to list the tables
scala> spark.sql("show tables").show()
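The same check also works outside the REPL. Below is a minimal standalone sketch: HiveTablesDemo is a placeholder name, and it assumes hive-site.xml is on the classpath, as it is when the job is submitted with spark-submit from this machine.

import org.apache.spark.sql.SparkSession

object HiveTablesDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveTablesDemo")
      .enableHiveSupport()           // read hive-site.xml and connect to the metastore
      .getOrCreate()
    spark.sql("show tables").show()  // same query as in the interactive shell
    spark.stop()
  }
}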
Spark also ships a dedicated SQL shell that works against Hive directly:
cd /root/spark-2.2.1-bin-hadoop2.7/bin
./spark-sql
# Run the following SQL-like statement to list the tables
spark-sql> show tables;
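From here you can run ordinary HiveQL against any of the listed tables; src below is a hypothetical table name, so substitute one of your own:
spark-sql> select * from src limit 10;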
That completes the process of accessing Hive from Spark.
Original article: https://www.cnblogs.com/alichengxuyuan/p/12576812.html