Installing Hue and MySQL on the same machine is not recommended: the dependency conflicts are hard to resolve.
As the root user, install the system packages Hue depends on via yum:

[root@hadoop001 hue-3.12.0]# yum -y install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel

Unpack the source and build:

[root@hadoop001 apps]# tar -zxvf hue-3.12.0.gz
[root@hadoop001 apps]# cd hue-3.12.0
[root@hadoop001 hue-3.12.0]# make apps

The build may fail with a series of dependency errors. If so, run the following commands:

yum install -y gcc openssl-devel
yum install -y gcc gcc-c++ kernel-devel
yum install -y libxslt-devel
yum install -y gmp-devel
yum install -y sqlite-devel
yum install -y libffi-devel openssl-devel
yum install -y openldap-devel
yum install -y mysql-server mysql mysql-devel

then run make apps again to compile.
Configure Hadoop. In core-site.xml:

<property>
  <name>hadoop.http.staticuser.user</name>
  <value>hadoop</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
httpfs-site.xml:
<property>
  <name>httpfs.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.root.groups</name>
  <value>*</value>
</property>
After these changes, distribute the configuration files to all nodes and restart Hadoop.
Configure the Hue configuration file
[root@hadoop001 hue-3.12.0]# vim desktop/conf/hue.ini

Under [desktop]:
  secret_key=qiaowenxuandeCSDNboke
  http_host=hadoop001
  http_port=8888
  time_zone=Asia/Shanghai

Point Hue's database at MySQL. Note: there are two [[database]] sections, and both must be configured.

The first [[database]] is a little further down in the file:
  engine=mysql
  host=hadoop000
  port=3306
  user=root
  password=
  name=hue

The second [[database]] can be found by searching for "postgresql_psycopg2, mysql, sqlite3 or oracle."; give it the same settings.

Create the corresponding hue database in MySQL:
mysql> create database hue;

Initialize the tables by syncing the database:
[root@hadoop001 hue-3.12.0]# build/env/bin/hue syncdb

Import the data, mainly the tables needed by oozie, pig, and desktop:
[root@hadoop001 hue-3.12.0]# build/env/bin/hue migrate

Check that the tables were created in MySQL:
mysql> show tables;
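Put together, the relevant hue.ini fragment looks roughly like this; note that [[database]] is nested one level under [desktop]. The values mirror the ones above (the empty password is kept as given); adjust for your environment:

```ini
[desktop]
  secret_key=qiaowenxuandeCSDNboke
  http_host=hadoop001
  http_port=8888
  time_zone=Asia/Shanghai

  # First [[database]] section; the second one, further down
  # in the file, gets the same values.
  [[database]]
    engine=mysql
    host=hadoop000
    port=3306
    user=root
    password=
    name=hue
```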
Configure HDFS:
Search for [[hdfs_clusters to jump straight to it:
  fs_defaultfs=hdfs://hadoop000:8020
  logical_name=root
  webhdfs_url=http://hadoop000:50070/webhdfs/v1
  hadoop_conf_dir=/apps/hadoop-2.9.0/etc/hadoop
  hadoop_bin=/apps/hadoop-2.9.0/bin
  hadoop_hdfs_home=/apps/hadoop-2.9.0
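To sanity-check the webhdfs_url value, note that every WebHDFS REST call is just that base URL plus a file path and an `op` query parameter. A small sketch of how such a URL is composed (the hostname follows the config above; the `webhdfs_op_url` helper is illustrative, not part of Hue):

```python
def webhdfs_op_url(base_url, path, op, user=None):
    """Build a WebHDFS REST URL, e.g. for LISTSTATUS on a path."""
    url = base_url.rstrip("/") + "/" + path.lstrip("/") + "?op=" + op
    if user:
        # Hue authenticates against WebHDFS as the proxy user.
        url += "&user.name=" + user
    return url

# The kind of URL Hue hits when it lists a directory:
print(webhdfs_op_url("http://hadoop000:50070/webhdfs/v1", "/tmp",
                     "LISTSTATUS", user="root"))
# → http://hadoop000:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=root
```

Fetching that URL with curl should return a JSON FileStatuses document if dfs.webhdfs.enabled took effect.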
Now start the Hue process:
[root@hadoop001 hue-3.12.0]# build/env/bin/supervisor
In hue.ini, search for "Webserver runs as this user" and change the defaults to the login user root:

# Webserver runs as this user
server_user=root
server_group=root

# This should be the Hue admin and proxy user
default_user=root
Configure the ResourceManager:
Search for [[yarn_clusters to jump straight to it, then find [[[ha]]]; these are the high-availability settings to modify:
  logical_name=my-rm-name
  submit_to=True
  resourcemanager_api_url=http://hadoop000:8088
Configure Hive
Search for [beeswax to jump straight to it:
  hive_server_host=hadoop000
  hive_server_port=10000
  hive_conf_dir=/apps/hive-2.3.6/conf
Three ways to start HiveServer2 in the background (note the redirections must have no space between the file descriptor and >):

nohup bin/hiveserver2 1>logs/hiveserver2.log 2>logs/hiveserver2.err &
nohup hiveserver2 1>/dev/null 2>/dev/null &
nohup hiveserver2 >/dev/null 2>&1 &
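The difference between the last two forms: 2>/dev/null opens /dev/null a second time for stderr, while 2>&1 duplicates stderr onto whatever stdout already points at. The same duplication can be sketched in Python, where stderr=subprocess.STDOUT plays the role of the shell's 2>&1:

```python
import subprocess

# stderr=subprocess.STDOUT mirrors the shell's 2>&1:
# stderr is duplicated onto the stdout stream.
result = subprocess.run(
    ["sh", "-c", "echo to-stdout; echo to-stderr 1>&2"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
print(result.stdout)  # both lines arrive together on stdout
```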
yum install cyrus-sasl-plain cyrus-sasl-devel cyrus-sasl-gssapi

or:

sudo yum install apache-maven ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
Configure HBase
The HBase setting points at the Thrift server address, not the master address, and the value must be wrapped in parentheses. The Thrift server also needs to be started separately.
Search for [hbase to jump straight to it:
hbase_clusters=(hadoop005:9090)
hbase_conf_dir=/apps/hbase-1.4.11/conf
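hbase_clusters is a comma-separated list of parenthesized entries, and an entry may also carry a display name in the form (Name|host:port). A sketch of parsing that format (the parse_hbase_clusters helper is illustrative, not part of Hue):

```python
import re

def parse_hbase_clusters(value):
    """Parse a hue.ini hbase_clusters value such as
    '(hadoop005:9090)' or '(Cluster|hadoop005:9090)'."""
    clusters = []
    for entry in re.findall(r"\(([^)]+)\)", value):
        name, _, addr = entry.rpartition("|")  # name may be absent
        host, port = addr.split(":")
        clusters.append((name or host, host, int(port)))
    return clusters

print(parse_hbase_clusters("(hadoop005:9090)"))
# → [('hadoop005', 'hadoop005', 9090)]
```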
Start the Thrift server:
[root@hadoop005 hbase-1.4.11]# hbase-daemon.sh start thrift
Configure Spark
[root@hadoop001 apps]# wget http://mirrors.tuna.tsinghua.edu.cn/apache/incubator/livy/0.5.0-incubating/livy-0.5.0-incubating-bin.zip
[root@hadoop001 apps]# unzip livy-0.5.0-incubating-bin.zip
[root@hadoop001 livy-0.5.0-incubating-bin]# bin/livy-server &
Search for "Spark application" to jump straight to it:
  livy_server_host=hadoop001
  livy_server_port=8998
  livy_server_session_kind=spark://hadoop000:7077
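Hue talks to Livy over its REST API: creating an interactive session is a POST of a small JSON body to /sessions on the host and port configured above. A hedged sketch of building that body (the livy_create_session_body helper is illustrative, not part of Hue or Livy):

```python
import json

def livy_create_session_body(kind="spark", conf=None):
    """JSON body for POST http://hadoop001:8998/sessions."""
    body = {"kind": kind}   # spark, pyspark, or sparkr
    if conf:
        body["conf"] = conf # optional Spark configuration overrides
    return json.dumps(body)

print(livy_create_session_body(kind="pyspark"))
# → {"kind": "pyspark"}
```

The equivalent check from the shell would be something like: curl -X POST -H 'Content-Type: application/json' -d '{"kind":"spark"}' http://hadoop001:8998/sessions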
Configure ZooKeeper
Search for [zookeeper to jump straight to it:
  host_ports=hadoop005:2181,hadoop006:2181,hadoop007:2181
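host_ports is a plain comma-separated host:port list. A small sketch of splitting it into pairs, as a client would before connecting (the parse_host_ports helper is illustrative, not part of Hue):

```python
def parse_host_ports(value):
    """Split a hue.ini host_ports string into (host, port) pairs."""
    pairs = []
    for item in value.split(","):
        host, _, port = item.strip().partition(":")
        # 2181 is ZooKeeper's default client port if none is given.
        pairs.append((host, int(port) if port else 2181))
    return pairs

print(parse_host_ports("hadoop005:2181,hadoop006:2181,hadoop007:2181"))
# → [('hadoop005', 2181), ('hadoop006', 2181), ('hadoop007', 2181)]
```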
Publisher: Full-Stack Programmer (site admin). When reprinting, please credit the source: https://javaforall.net/221142.html Original link: https://javaforall.net
