Hue Installation Notes


These notes record installing Hue from the yum repository and configuring it, then integrating Hue with HDFS, Hive, Impala, YARN, Kerberos, LDAP, Sentry, Solr, and so on.

 

Cluster layout: 192.168.211.178 (HA active)
                192.168.211.179 (datanode)
                192.168.211.180 (HA standby)
                192.168.211.185 (datanode)
                192.168.211.253 (datanode)

1. Install Hue

Hue is installed on: 192.168.211.179

 

$ yum install hue hue-server 
$ yum install hadoop-httpfs 

 

2. Configure the Hue server

Edit the configuration file hue.ini:

 

  
[desktop]
secret_key=test123              # any value; the longer and more random the better, used for encryption
http_host=192.168.211.179       # the host Hue is installed on
http_port=8888                  # default port, can be customized
use_cherrypy_server=True        # which web server to start
server_user=hue
server_group=hue
enable_server=yes

[hadoop]
  [[hdfs_clusters]]
    # HA support by using HttpFS
    [[[default]]]
      fs_defaultfs=hdfs://BIService   # matches fs.defaultFS in core-site.xml
      webhdfs_url=http://192.168.211.178:14000/webhdfs/v1   # the HA HttpFS endpoint
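As a quick offline sanity check, the flat `[desktop]` keys above can be parsed with the standard-library `configparser` (note this is only a sketch: Hue itself reads hue.ini with configobj, which also understands the nested `[[...]]` sections that configparser does not):

```python
import configparser

# The flat [desktop] section from the hue.ini excerpt above.
desktop_ini = """
[desktop]
secret_key=test123
http_host=192.168.211.179
http_port=8888
"""

cfg = configparser.ConfigParser()
cfg.read_string(desktop_ini)
print(cfg.get("desktop", "http_host"))     # 192.168.211.179
print(cfg.getint("desktop", "http_port"))  # 8888
```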

 

If the cluster submits jobs through MR1, configure mapred_clusters; otherwise configure yarn_clusters (MR2). Configure exactly one of the two.

 

  
[[mapred_clusters]]
  # HA support by specifying multiple configs
  [[[default]]]
    submit_to=False   # do not submit jobs through MR1

[[yarn_clusters]]
  [[[default]]]
    # Enter the host on which you are running the ResourceManager;
    # matches yarn.resourcemanager.address in yarn-site.xml
    resourcemanager_host=192.168.211.178
    resourcemanager_port=18040

    # Whether to submit jobs to this cluster
    submit_to=True    # submit jobs through YARN (MR2)

    # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
    hadoop_conf_dir=/etc/hadoop/conf

    # URL of the ResourceManager API;
    # matches yarn.resourcemanager.webapp.address in yarn-site.xml
    resourcemanager_api_url=http://192.168.211.178:18088

[beeswax]   # Hive settings
  # The hostname or IP that the Hive Server should bind to. By default it
  # binds to localhost, and therefore only serves local IPC clients.
  beeswax_server_host=localhost
  server_interface=hiveserver2
  # Matches hive.server2.thrift.port in hive-site.xml
  beeswax_server_port=10001
  # Host where the internal metastore Thrift daemon is running.
  beeswax_meta_server_host=localhost

[impala]
  # Host of the Impala Server (one of the impalad daemons)
  server_host=localhost
 

 

If the cluster is HA, use the HttpFS configuration (HA mode) below.

If the cluster is non-HA, use the WebHDFS configuration instead.

 

Configuring HttpFS (HA mode)

HttpFS: Verify that /etc/hadoop-httpfs/conf/httpfs-site.xml has the following configuration:

 

  
<!-- Hue HttpFS proxy user setting -->
<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>

 

Verify that core-site.xml has the following configuration:

 

  
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>

 

If the configuration is not present, add it to /etc/hadoop/conf/core-site.xml and restart Hadoop.
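With the proxy-user settings in place, HttpFS can be exercised through the standard WebHDFS REST API. A small sketch that only builds the request URL (the host and port mirror the webhdfs_url from hue.ini above):

```python
def webhdfs_url(base, path="/", op="LISTSTATUS", user="hue"):
    """Build a WebHDFS/HttpFS REST URL for the given HDFS path and operation."""
    return "%s/webhdfs/v1%s?op=%s&user.name=%s" % (base.rstrip("/"), path, op, user)

print(webhdfs_url("http://192.168.211.178:14000", "/user/hue"))
# http://192.168.211.178:14000/webhdfs/v1/user/hue?op=LISTSTATUS&user.name=hue
```

Fetching that URL (for example with curl) should return a JSON FileStatuses listing if HttpFS and the hue proxy-user settings are working.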

Configuring WebHDFS (non-HA mode)

1. a) Add the following property in hdfs-site.xml to enable WebHDFS in the NameNode and DataNodes:

  
  
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

b)   Restart your HDFS cluster.

2. Configure Hue as a proxy user for all other users and groups, meaning it may submit a request on behalf of any other user:

Add to core-site.xml:

  
  
<!-- Hue WebHDFS proxy user setting -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

 

3. Verify that core-site.xml has the following configuration:

  
  
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>

If the configuration is not present, add it to /etc/hadoop/conf/core-site.xml and restart Hadoop.
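A small sketch that checks whether a core-site.xml string already contains the proxy-user properties for a given user, using only the standard library (the function name and sample XML are illustrative, not part of Hadoop or Hue):

```python
import xml.etree.ElementTree as ET

def missing_proxy_props(xml_text, user):
    """Return the hadoop.proxyuser.<user>.* property names absent from a core-site.xml string."""
    present = {p.findtext("name") for p in ET.fromstring(xml_text).iter("property")}
    wanted = {"hadoop.proxyuser.%s.hosts" % user,
              "hadoop.proxyuser.%s.groups" % user}
    return sorted(wanted - present)

sample = """<configuration>
  <property><name>hadoop.proxyuser.httpfs.hosts</name><value>*</value></property>
</configuration>"""
print(missing_proxy_props(sample, "httpfs"))  # ['hadoop.proxyuser.httpfs.groups']
```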

3. Start the services

 

  
[root@slave1 hue]# service hadoop-httpfs start
[root@slave1 hue]# service hue start

Log location: /var/log/hue/error.log

Web UI: http://192.168.211.179:8888

4. References

1. Configuring CDH Components for Hue
http://www.cloudera.com/content/cloudera/en/documentation/cdh4/latest/CDH4-Installation-Guide/CDH4-Installation-Guide.html

2. Hue installation and usage tutorial
http://blog.csdn.net/nsrainbow/article/details/

3. Installing and configuring Hue
http://itindex.net/detail/52831-hue

4. Hue issue notes
http://m.oschina.net/blog/

 

If the following exception appears:

Exception message:

"UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 0: ordinal not in range(128)"

 

File "/usr/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response

    response = callback(request, *callback_args, **callback_kwargs)

  File "/usr/share/hue/apps/beeswax/src/beeswax/views.py", line 60, in index

    return execute_query(request)

  File "/usr/share/hue/apps/beeswax/src/beeswax/views.py", line 382, in execute_query

    dbs = db.get_databases()

  File "/usr/share/hue/apps/beeswax/src/beeswax/server/dbms.py", line 119, in get_databases

    return self.client.get_databases()

  File "/usr/share/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 581, in get_databases

    return [table[col] for table in self._client.get_databases()]

  File "/usr/share/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 324, in get_databases

    res = self.call(self._client.GetSchemas, req)

  File "/usr/share/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 289, in call

    session = self.open_session(self.user)

  File "/usr/share/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 270, in open_session

    res = self._client.OpenSession(req)

  File "/usr/share/hue/desktop/core/src/desktop/lib/thrift_util.py", line 289, in wrapper

    raise StructuredException('THRIFTAPPLICATION', str(e), data=None, error_code=502)

 

Fix: add the following snippet to the top of each of these files:

/usr/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py

/usr/share/hue/apps/beeswax/src/beeswax/views.py

/usr/share/hue/apps/beeswax/src/beeswax/server/dbms.py

/usr/share/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py

/usr/share/hue/desktop/core/src/desktop/lib/thrift_util.py

  
import sys
default_encoding = 'utf-8'
if sys.getdefaultencoding() != default_encoding:
    reload(sys)
    sys.setdefaultencoding(default_encoding)
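The root cause is that Hive returns non-ASCII bytes which Python 2.6 then decodes with its default 'ascii' codec. The failure mode can be reproduced directly (shown here with Python 3 byte literals for clarity; the snippet above is the Python 2 workaround):

```python
# The byte 0xe9 from the traceback is 'é' in Latin-1 and is not valid ASCII.
raw = b"\xe9"
try:
    raw.decode("ascii")
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xe9 in position 0: ...

print(raw.decode("latin-1"))        # é
print(b"\xc3\xa9".decode("utf-8"))  # é (the same character encoded as UTF-8)
```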