Setup notes: my environment is three self-built virtual machines, hadoop01/hadoop02/hadoop03. Generating the CA files hdfs_ca_key and hdfs_ca_cert only needs to happen on any one node; every node, including the one that generated the CA, must then perform step 4 and everything after it. All of the following must be run as root.
1. Generate the CA certificate on hadoop01; you will be asked for the passphrase twice. In the subject: C is the country code (CN for China), ST the province, L the city, O and OU the company or personal domain, and the final CN is hadoop01, the hostname of the node generating the CA certificate:
openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj /C=CN/ST=shanxi/L=xian/O=hlk/OU=hlk/CN=hadoop01
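To confirm the subject landed correctly, the certificate can be inspected with `openssl x509 -noout -subject`. The sketch below is a throwaway rehearsal of the same command: `-nodes` (a demo-only addition) skips the passphrase prompt so it runs non-interactively, and the demo files are deleted afterwards. On a real node, run the `x509` line against hdfs_ca_cert itself.

```shell
# Demo only: same subject as the real command, but -nodes avoids the
# passphrase prompt; inspect the real hdfs_ca_cert with the x509 line.
openssl req -new -x509 -nodes -keyout demo_ca_key -out demo_ca_cert -days 9999 \
  -subj "/C=CN/ST=shanxi/L=xian/O=hlk/OU=hlk/CN=hadoop01"
subject=$(openssl x509 -in demo_ca_cert -noout -subject)
echo "$subject"
rm -f demo_ca_key demo_ca_cert
```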
2. Distribute the CA files hdfs_ca_key and hdfs_ca_cert generated on hadoop01 to the /tmp directory of every node:
scp hdfs_ca_key hdfs_ca_cert $host:/tmp
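With more than a couple of machines, the copy is easier as a loop. A minimal sketch, assuming the hostnames hadoop02 and hadoop03 from this setup; the `echo` makes it a dry run that only prints the commands:

```shell
# Dry run: prints one scp command per node; drop 'echo' to copy for real.
out=$(for host in hadoop02 hadoop03; do
  echo scp hdfs_ca_key hdfs_ca_cert "$host:/tmp"
done)
echo "$out"
```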
3. After the copies complete, delete the CA files on hadoop01:
rm -rf hdfs_ca_key hdfs_ca_cert
4. On every node, work in /tmp where the CA files were copied.
4.1 Generate the keystore; you will be asked for the password (the one entered in step 1) four times:
name="CN=$HOSTNAME, OU=hlk, O=hlk, L=xian, ST=shanxi, C=CN"
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "$name"
4.2 Import the CA certificate into the truststore; again you will be prompted for a password:
keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert
4.3 Export the cert from the keystore:
keytool -certreq -alias localhost -keystore keystore -file cert
4.4 Sign the cert with the CA:
openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial
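A quick way to confirm the signing step worked is `openssl verify`. The sketch below rehearses the whole flow end to end with throwaway names (demo-ca and demo-leaf are made up for the demo, and `-nodes` skips the passphrase prompts); on a real node, `openssl verify -CAfile hdfs_ca_cert cert_signed` should print OK.

```shell
# Throwaway rehearsal of the sign-then-verify flow; demo-ca/demo-leaf are
# hypothetical names and -nodes avoids the passphrase prompts.
openssl req -new -x509 -nodes -keyout ca.key -out ca.crt -days 9999 -subj "/CN=demo-ca"
openssl req -new -nodes -keyout leaf.key -out leaf.csr -subj "/CN=demo-leaf"
openssl x509 -req -CA ca.crt -CAkey ca.key -in leaf.csr -out leaf.crt -days 9999 -CAcreateserial
result=$(openssl verify -CAfile ca.crt leaf.crt)
echo "$result"
rm -f ca.key ca.crt ca.srl leaf.key leaf.csr leaf.crt
```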
4.5 Import both the CA certificate and the CA-signed cert into the keystore:
keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
4.6 Move the final keystore and truststore into a suitable directory and give them the .jks suffix:
mkdir -p /etc/security/https && chmod 755 /etc/security/https
cp keystore /etc/security/https/keystore.jks
cp truststore /etc/security/https/truststore.jks
4.7 Delete the intermediate files left behind in /tmp:
rm -f keystore truststore hdfs_ca_key hdfs_ca_cert.srl hdfs_ca_cert cert_signed cert
5. Configure the $HADOOP_HOME/etc/hadoop/ssl-server.xml and ssl-client.xml files.
Note: configure these two files on one node, then copy them to the same location on every other node!
5.1 Configure $HADOOP_HOME/etc/hadoop/ssl-client.xml:
ssl-client.xml:
<configuration>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/etc/security/https/truststore.jks</value>
    <description>Truststore to be used by clients like distcp. Must be specified.</description>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>hadoop</value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
  <property>
    <name>ssl.client.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
  </property>
  <property>
    <name>ssl.client.keystore.location</name>
    <value>/etc/security/https/keystore.jks</value>
    <description>Keystore to be used by clients like distcp. Must be specified.</description>
  </property>
  <property>
    <name>ssl.client.keystore.password</name>
    <value>hadoop</value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.client.keystore.keypassword</name>
    <value>hadoop</value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.client.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
</configuration>
5.2 Configure $HADOOP_HOME/etc/hadoop/ssl-server.xml:
ssl-server.xml:
<configuration>
  <property>
    <name>ssl.server.truststore.location</name>
    <value>/etc/security/https/truststore.jks</value>
    <description>Truststore to be used by NN and DN. Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.truststore.password</name>
    <value>hadoop</value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.server.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
  <property>
    <name>ssl.server.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
  </property>
  <property>
    <name>ssl.server.keystore.location</name>
    <value>/etc/security/https/keystore.jks</value>
    <description>Keystore to be used by NN and DN. Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.keystore.password</name>
    <value>hadoop</value>
    <description>Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.keystore.keypassword</name>
    <value>hadoop</value>
    <description>Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
</configuration>
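These two files only take effect once HDFS is actually told to serve HTTPS, which in Hadoop 2.x is done in hdfs-site.xml. A minimal sketch, assuming the Hadoop 2.x default HTTPS ports (50470 for the NameNode, 50475 for DataNodes):

```xml
<!-- hdfs-site.xml: switch the HDFS web UIs to HTTPS -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>  <!-- or HTTP_AND_HTTPS during migration -->
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>hadoop01:50470</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:50475</value>
</property>
```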
PS: the steps above demand typing passwords over and over, and all that interaction becomes unmanageable once the cluster grows, so I wrote a shell script that performs the whole HTTPS installation in one run. It is offered for reference only!
#! /bin/bash
# Install HTTPS across the cluster

function make_CA(){
    echo 'make_CA begin ...'
    cd ~
    # Remove any stale CA files from a previous run
    rm -rf hdfs_ca*
    # Generate the CA on hadoop01; every password is "hadoop"
    /usr/bin/expect <<-EOF
	set timeout 30
	spawn openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj /C=CN/ST=shanxi/L=xian/O=hlk/OU=hlk/CN=hadoop01
	expect {
	    "*PEM pass phrase*" {send "hadoop\r"; exp_continue}
	    "*Enter PEM pass phrase:*" {send "hadoop\r"; exp_continue}
	}
	EOF
    # Distribute hdfs_ca_key and hdfs_ca_cert to the other nodes
    hosts=`sed -n 3,5p /etc/hosts | awk '{print $2}'`
    for host in $hosts; do
        echo "copy hadoop CA to $host:/tmp"
        scp hdfs_ca_* $host:/tmp
    done
    rm -rf hdfs_ca*
    echo 'make_CA end ...'
}

# Generate the keystore and truststore on every machine
function make_certificate(){
    cd /tmp
    # keytool needs the Java environment
    source /home/hadoop/.bashrc
    # Generate the keystore
    name="CN=$HOSTNAME, OU=hlk, O=hlk, L=xian, ST=shanxi, C=CN"
    /usr/bin/expect <<-EOF
	spawn keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "$name"
	expect {
	    "*keystore password*" {send "hadoop\r"; exp_continue}
	    "*new password:*" {send "hadoop\r"; exp_continue}
	    "*as keystore password):*" {send "hadoop\r"; exp_continue}
	    "*Re-enter new password:*" {send "hadoop\r"; exp_continue}
	}
	EOF
    # Import the CA into the truststore
    /usr/bin/expect <<-EOF
	spawn keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert
	expect {
	    "*keystore password:*" {send "hadoop\r"; exp_continue}
	    "*new password:*" {send "hadoop\r"; exp_continue}
	    "*Trust this certificate*" {send "yes\r"; exp_continue}
	}
	EOF
    # Export the cert from the keystore
    /usr/bin/expect <<-EOF
	spawn keytool -certreq -alias localhost -keystore keystore -file cert
	expect {
	    "*keystore password:*" {send "hadoop\r"; exp_continue}
	}
	EOF
    # Sign the cert with the CA
    /usr/bin/expect <<-EOF
	spawn openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial
	expect {
	    "*phrase for hdfs_ca_key:*" {send "hadoop\r"; exp_continue}
	}
	EOF
    # Import the CA cert and the CA-signed cert into the keystore
    /usr/bin/expect <<-EOF
	spawn keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
	expect {
	    "*keystore password:*" {send "hadoop\r"; exp_continue}
	    "*Trust this certificate*" {send "yes\r"; exp_continue}
	}
	EOF
    /usr/bin/expect <<-EOF
	spawn keytool -keystore keystore -alias localhost -import -file cert_signed
	expect {
	    "*keystore password:*" {send "hadoop\r"; exp_continue}
	}
	EOF
    # Move the final keystore and truststore into place with the .jks suffix
    rm -rf /etc/security/https && mkdir -p /etc/security/https
    chmod 755 /etc/security/https
    echo "install keystore, truststore to /etc/security/https/..."
    cp keystore /etc/security/https/keystore.jks
    cp truststore /etc/security/https/truststore.jks
    # Delete the intermediate files
    rm -f keystore truststore hdfs_ca_key hdfs_ca_cert.srl hdfs_ca_cert cert_signed cert
}

function main(){
    echo "[+] execute hlk_each_host_install_https.sh begin ..."
    # Must be run as root
    if [ "x$USER" != "xroot" ]; then
        echo "[-] Installation of HTTPS must be performed with the root user ..."
        return
    fi
    if [ "x$HOSTNAME" == "xhadoop01" ]; then
        # Create the CA only on hadoop01
        make_CA
    fi
    # Every node obtains a CA-signed certificate
    make_certificate
    # Install the ssl-server.xml and ssl-client.xml files under $HADOOP_HOME/etc/hadoop/
    cp /home/hadoop/conf/hadoop/ssl-*.xml /home/hadoop/core/hadoop-2.7.6/etc/hadoop/
    echo "[+] execute hlk_each_host_install_https.sh end ..."
}

main
If this helped you, please leave a like; it keeps me motivated to keep writing!
Publisher: 全栈程序员-站长. Please credit the source when reposting: https://javaforall.net/201317.html
