When querying a table in Hive, the following exception was thrown:
hive> select * from blackList;
FAILED: SemanticException Unable to determine if hdfs://node1:8020/opt/hive/warehouse is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://node1:8020/opt/hive/warehouse, expected: hdfs://hadoop-node1.com:8020
This happened because I changed the HDFS NameNode address and port. It is a Hive metastore problem: my Hive metadata is stored in MySQL, and MySQL still holds the old NameNode address.
My Hive metastore database in MySQL is sparkStreaming_db:
use sparkStreaming_db;   -- switch to the metastore database
show tables;             -- list all tables in that database
Two of these tables are relevant here:
DBS : the location of the Hive data warehouse
SDS : the storage location of each Hive table
mysql> select * from DBS;
+-------+-----------------------+-------------------------------------------------+---------+------------+------------+
| DB_ID | DESC                  | DB_LOCATION_URI                                 | NAME    | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+-------------------------------------------------+---------+------------+------------+
|     1 | Default Hive database | hdfs://hadoop-node1.com:9000/opt/hive/warehouse | default | public     | ROLE       |
+-------+-----------------------+-------------------------------------------------+---------+------------+------------+
hdfs://hadoop-node1.com:9000 is the address before the change.
mysql> select * from SDS;
+-------+-------+------------------------------------------+---------------+---------------------------+----------------------------------------+-------------+------------------------------------------------------------+----------+
| SD_ID | CD_ID | INPUT_FORMAT                             | IS_COMPRESSED | IS_STOREDASSUBDIRECTORIES | LOCATION                               | NUM_BUCKETS | OUTPUT_FORMAT                                              | SERDE_ID |
+-------+-------+------------------------------------------+---------------+---------------------------+----------------------------------------+-------------+------------------------------------------------------------+----------+
|     3 |     3 | org.apache.hadoop.mapred.TextInputFormat |               |                           | hdfs://node1:8020/opt/hive/hive_tables |          -1 | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat |        3 |
|     4 |     4 | org.apache.hadoop.mapred.TextInputFormat |               |                           | hdfs://node1:8020/opt/hive/hive_tables |          -1 | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat |        4 |
+-------+-------+------------------------------------------+---------------+---------------------------+----------------------------------------+-------------+------------------------------------------------------------+----------+
Here you can see that the stored addresses (hdfs://hadoop-node1.com:9000 and hdfs://node1:8020) have the wrong hostname and port. Even if hadoop-node1.com and node1 resolve to the same IP, it still fails: the hostname and the port must exactly match the current NameNode address. Only the path that follows them may differ.
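In other words, only the hdfs://host:port prefix of each stored URI needs to change, and the path stays as it is. A small sketch of that substitution (using the URIs from the tables above and plain sed; the target authority hdfs://hadoop-node1.com:8020 is my cluster's new address):

```shell
# Rewrite only the authority (scheme://host:port) part of a stored
# HDFS URI, leaving the path untouched. Pure string substitution.
old_db='hdfs://hadoop-node1.com:9000/opt/hive/warehouse'
old_sd='hdfs://node1:8020/opt/hive/hive_tables'
new_authority='hdfs://hadoop-node1.com:8020'

# '#' is used as the sed delimiter because the pattern contains '/'
echo "$old_db" | sed 's#^hdfs://[^/]*#'"$new_authority"'#'
echo "$old_sd" | sed 's#^hdfs://[^/]*#'"$new_authority"'#'
# → hdfs://hadoop-node1.com:8020/opt/hive/warehouse
# → hdfs://hadoop-node1.com:8020/opt/hive/hive_tables
```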
Fix the metadata with the following statements (note: SDS has no DB_ID column, so the rows are matched by SD_ID, and both rows 3 and 4 need updating):
update DBS set DB_LOCATION_URI='hdfs://hadoop-node1.com:8020/opt/hive/warehouse' where DB_ID=1;
update SDS set LOCATION='hdfs://hadoop-node1.com:8020/opt/hive/hive_tables' where SD_ID=3;
update SDS set LOCATION='hdfs://hadoop-node1.com:8020/opt/hive/hive_tables' where SD_ID=4;
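As an alternative to hand-editing MySQL, Hive ships a metastore tool that can perform the same URI rewrite. A sketch of its use on a node where Hive is installed (check `hive --service metatool -help` on your version first, as options vary between releases):

```shell
# Show the filesystem root currently recorded in the metastore
hive --service metatool -listFSRoot

# Preview the rewrite without changing anything
hive --service metatool -updateLocation \
    hdfs://hadoop-node1.com:8020 hdfs://node1:8020 -dryRun

# Apply it: replace the old authority with the new one everywhere
hive --service metatool -updateLocation \
    hdfs://hadoop-node1.com:8020 hdfs://node1:8020
```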
Source: http://www.cnblogs.com/zhangXingSheng/p/7073584.html