Installing Kerberos and Integrating It with HDFS and Hive


Environment: one CentOS 6.5 host runs the Kerberos master (KDC); two CentOS 6.5 hosts run the Kerberos client.

1. Install Kerberos on the master host

yum install krb5-server krb5-libs krb5-workstation -y 

1.1 Configure kdc.conf

vim /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  max_renewable_life = 7d
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Notes: HADOOP.COM is the realm name used throughout this guide. acl_file and admin_keytab point to the admin ACL and admin keytab, max_renewable_life caps ticket renewal at 7 days, and supported_enctypes lists the encryption types the KDC will use for new principals.

1.2 Configure krb5.conf

vim /etc/krb5.conf

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 clockskew = 120
 udp_preference_limit = 1

[realms]
 HADOOP.COM = {
  kdc = node01
  admin_server = node01
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM

Notes: default_realm must match the realm defined in kdc.conf; kdc and admin_server both point at node01, the host running the KDC; udp_preference_limit = 1 forces the Kerberos libraries to use TCP, which avoids failures with large tickets; clockskew = 120 tolerates up to two minutes of clock drift between hosts.
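Because kdc and admin_server are given as the hostname node01, all three hosts must be able to resolve node01 through node03. A minimal /etc/hosts sketch (the IP addresses below are placeholders, not taken from the original setup); add the same entries on every host, or rely on DNS, before continuing:

192.168.1.101 node01
192.168.1.102 node02
192.168.1.103 node03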

1.3 Initialize the Kerberos database

kdb5_util create -s -r HADOOP.COM 

Here, -s generates a stash file in which the master key of the KDC (krb5kdc) is stored, so the daemon can start without prompting for the master password; -r specifies the realm name, which is only required when krb5.conf defines more than one realm.
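To confirm the database was created, list the KDC directory; the principal database and the stash file (typically named .k5.HADOOP.COM for this realm on CentOS) should now exist:

ls -la /var/kerberos/krb5kdc/
# expect files such as: principal  principal.ok  principal.kadm5  .k5.HADOOP.COM  kdc.conf  kadm5.acl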

1.4 Grant ACL permissions to the database administrator

vim /var/kerberos/krb5kdc/kadm5.acl

# change the content as follows; this grants full administrative rights to any principal whose instance is admin (for example root/admin@HADOOP.COM)
*/admin@HADOOP.COM     *

1.5 Start the Kerberos daemons and enable them at boot

service krb5kdc start
service kadmin start
chkconfig krb5kdc on
chkconfig kadmin on

2. Deploy the Kerberos client on the other two hosts

yum install krb5-workstation krb5-libs -y

# copy krb5.conf from the master host to the two client hosts
scp /etc/krb5.conf node02:/etc/krb5.conf
scp /etc/krb5.conf node03:/etc/krb5.conf

3. Day-to-day Kerberos operations

3.1 First, set a password for root/admin

[root@node-1 ~]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local: addprinc root/admin
WARNING: no policy specified for root/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "root/admin@HADOOP.COM":
Re-enter password for principal "root/admin@HADOOP.COM":
Principal "root/admin@HADOOP.COM" created.
kadmin.local: listprincs
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/node01@HADOOP.COM
kiprop/node01@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
kadmin.local: exit

3.2 Add a new user hd1:

[root@node-3 ~]# kadmin
Authenticating as principal root/admin@HADOOP.COM with password.
Password for root/admin@HADOOP.COM:
kadmin: addprinc hd1
WARNING: no policy specified for hd1@HADOOP.COM; defaulting to no policy
Enter password for principal "hd1@HADOOP.COM":
Re-enter password for principal "hd1@HADOOP.COM":
Principal "hd1@HADOOP.COM" created.
kadmin: exit
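As a quick sanity check, request a ticket for the new principal from one of the client hosts and inspect the credential cache; this only assumes the hd1 principal and the password just set:

kinit hd1
# enter the password chosen above
klist
# should show a krbtgt/HADOOP.COM@HADOOP.COM ticket for hd1@HADOOP.COM
kdestroy   # discard the ticket when done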

=================================================================================

4. Integrate HDFS with Kerberos

4.1 Create Kerberos principals on the KDC

4.1.1 As root, run kadmin.local to enter the Kerberos command line and create the principals in the Kerberos database
addprinc -randkey hadoop/node01@HADOOP.COM
addprinc -randkey hadoop/node02@HADOOP.COM
addprinc -randkey hadoop/node03@HADOOP.COM
addprinc -randkey HTTP/node01@HADOOP.COM
addprinc -randkey HTTP/node02@HADOOP.COM
addprinc -randkey HTTP/node03@HADOOP.COM
4.1.2 Exit the Kerberos command line and, as root, export the keys of each principal into keytab files
kadmin.local -q "xst -k hadoop.keytab hadoop/node01@HADOOP.COM"
kadmin.local -q "xst -k hadoop.keytab hadoop/node02@HADOOP.COM"
kadmin.local -q "xst -k hadoop.keytab hadoop/node03@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/node01@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/node02@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/node03@HADOOP.COM"

At this point the generated keytab files are in root's home directory (/root).
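You can list the entries in the generated keytabs to make sure every principal made it in (paths assume the commands above were run from /root):

klist -kt /root/hadoop.keytab
klist -kt /root/HTTP.keytab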

4.1.3 From the root shell, merge hadoop.keytab and HTTP.keytab into hdfs.keytab.
ktutil
ktutil: rkt hadoop.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt hdfs.keytab
ktutil: quit

Copy hdfs.keytab to the /home/hadoop/ directory and distribute it to every Hadoop node (note that the hdfs-site.xml below refers to it as /root/hadoop/hdfs.keytab, so make sure the path in the configuration matches wherever you actually place the file), as sketched below.
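A minimal distribution sketch, assuming the keytab is kept at /root/hadoop/hdfs.keytab (the path used by the configuration later in this article) and that the HDFS daemons run as root; adjust the path and owner to your own layout:

mkdir -p /root/hadoop
cp /root/hdfs.keytab /root/hadoop/hdfs.keytab
chmod 400 /root/hadoop/hdfs.keytab
for host in node02 node03; do
  ssh $host "mkdir -p /root/hadoop"
  scp /root/hadoop/hdfs.keytab $host:/root/hadoop/hdfs.keytab
  ssh $host "chmod 400 /root/hadoop/hdfs.keytab"
done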

4.1.4 Notes

a. Hostnames must be lowercase. When a TGT is requested through kinit, Kerberos treats the hostname as lowercase regardless of how it was typed and issues the ticket accordingly; if the principal in the Kerberos database was created with an uppercase hostname, no ticket can be produced for that principal.

b. In a principal such as hadoop/namenode, the instance part (namenode) is only a label that distinguishes the hadoop identity on one host from the hadoop identity on another host. So even if the machine's hostname contains uppercase letters, the instance must still be entered in lowercase; a small sketch of deriving it from the hostname follows.
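An alternative way to run step 4.1.1 that enforces lowercase instances regardless of how the hostnames are written (the node list, realm, and service names follow this article; the tr step is the illustrative part):

for host in node01 node02 node03; do
  h=$(echo "$host" | tr '[:upper:]' '[:lower:]')
  kadmin.local -q "addprinc -randkey hadoop/${h}@HADOOP.COM"
  kadmin.local -q "addprinc -randkey HTTP/${h}@HADOOP.COM"
done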

4.2 Integrate HDFS with Kerberos (stop the cluster first)

4.2.1 Modify core-site.xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

The configuration above turns on Hadoop's security features and selects Kerberos as the authentication mechanism.

4.2.2 Modify hdfs-site.xml
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<!--
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
-->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:61004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:61006</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>hadoop/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/root/hadoop/hdfs.keytab</value>
</property>
4.2.3 After completing the configuration files above, startup reports the following error
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1201)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1101)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2406)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2293)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2340)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2522)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2546)
2018-03-13 14:01:27,317 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-03-13 14:01:27,318 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

A secure DataNode must use either privileged ports (below 1024) or SASL data-transfer protection together with HTTPS, but not both. With dfs.data.transfer.protection set, the non-privileged ports 61004/61006, and dfs.http.policy set to HTTPS_ONLY as in the configuration above, the DataNode gets past this check, but then another error appears:

2018-03-09 20:44:10,993 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/hadoop/conf/hdfs-service.keytab, for principal HTTP/
2018-03-09 20:44:11,000 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/hadoop/conf/hdfs-service.keytab, for principal HTTP/
2018-03-09 20:44:11,003 WARN org.mortbay.log: failed SslSelectChannelConnectorSecure@0.0.0.0:50470: java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
2018-03-09 20:44:11,003 WARN org.mortbay.log: failed Server@10ded6a9: java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
2018-03-09 20:44:11,003 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at org.mortbay.resource.FileResource.getInputStream(FileResource.java:275)
        at org.mortbay.jetty.security.SslSelectChannelConnector.createSSLContext(SslSelectChannelConnector.java:624)
        at org.mortbay.jetty.security.SslSelectChannelConnector.doStart(SslSelectChannelConnector.java:598)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.Server.doStart(Server.java:235)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:877)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:819)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:803)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)
2018-03-09 20:44:11,006 INFO org.mortbay.log: Stopped SslSelectChannelConnectorSecure@0.0.0.0:50470
2018-03-09 20:44:11,107 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-03-09 20:44:11,108 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-03-09 20:44:11,108 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-03-09 20:44:11,108 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.FileNotFoundException: /home/kduser/.keystore (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at org.mortbay.resource.FileResource.getInputStream(FileResource.java:275)
        at org.mortbay.jetty.security.SslSelectChannelConnector.createSSLContext(SslSelectChannelConnector.java:624)
        at org.mortbay.jetty.security.SslSelectChannelConnector.doStart(SslSelectChannelConnector.java:598)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.Server.doStart(Server.java:235)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:877)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:760)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:639)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:819)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:803)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1500)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)
2018-03-09 20:44:11,110 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-03-09 20:44:11,111 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down NameNode at v-hadoop-kbds.sz.kingdee.net/172.20.178.28

This error means HTTPS must be configured: the NameNode cannot find an SSL keystore.

4.2.4 Configure HTTPS

Generate a CA on node01 and copy it to node02 and node03. (The passwords can be anything, as long as they are at least 6 characters.)

cd /etc/https
openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 9999 -subj '/C=CN/ST=beijing/L=chaoyang/O=lecloud/OU=dt/CN=jenkin.com'
scp hdfs_ca_key hdfs_ca_cert node02:/etc/https/
scp hdfs_ca_key hdfs_ca_cert node03:/etc/https/

On every machine, generate a keystore and a truststore (you will be prompted for passwords several times; use the same password throughout and remember it for the ssl-*.xml files below).

# generate the keystore (use the machine's fully qualified hostname for ${fqdn})
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=${fqdn}, OU=DT, O=DT, L=CY, ST=BJ, C=CN"
# import the CA certificate into the truststore
keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert
# export a certificate signing request from the keystore
keytool -certreq -alias localhost -keystore keystore -file cert
# sign the request with the CA
openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial
# import the CA certificate and the signed certificate into the keystore
keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
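Before distributing the stores, you can verify that the signed certificate really chains to the CA and that both entries landed in the keystore (standard openssl/keytool checks, not part of the original write-up):

openssl verify -CAfile hdfs_ca_cert cert_signed
keytool -list -keystore keystore        # prompts for the keystore password; expect the CARoot and localhost entries
keytool -list -keystore truststore      # expect the CARoot entry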

Copy the final keystore and truststore into the target directory and add the .jks suffix:

cp keystore /etc/https/keystore.jks
cp truststore /etc/https/truststore.jks

Modify hdfs-site.xml (when the DataNode and NameNode are deployed on the same hosts, HTTPS_ONLY is required; it was already set earlier, so it does not need to be set again):

<property>
  <name>dfs.http.policy</name>
  <value>HTTP_AND_HTTPS</value>
  <!-- <value>HTTPS_ONLY</value> -->
</property>

Configure ssl-client.xml (fill the empty password values with the keystore/truststore password chosen above):

<configuration>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/etc/https/truststore.jks</value>
    <description>Truststore to be used by clients like distcp. Must be specified.</description>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value></value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
  <property>
    <name>ssl.client.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
  </property>
  <property>
    <name>ssl.client.keystore.location</name>
    <value>/etc/https/keystore.jks</value>
    <description>Keystore to be used by clients like distcp. Must be specified.</description>
  </property>
  <property>
    <name>ssl.client.keystore.password</name>
    <value></value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.client.keystore.keypassword</name>
    <value></value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.client.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
</configuration>

Configure ssl-server.xml (again, fill the empty password values with the password chosen above):

<configuration>
  <property>
    <name>ssl.server.truststore.location</name>
    <value>/etc/https/truststore.jks</value>
    <description>Truststore to be used by NN and DN. Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.truststore.password</name>
    <value></value>
    <description>Optional. Default value is "".</description>
  </property>
  <property>
    <name>ssl.server.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
  <property>
    <name>ssl.server.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
  </property>
  <property>
    <name>ssl.server.keystore.location</name>
    <value>/etc/https/keystore.jks</value>
    <description>Keystore to be used by NN and DN. Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.keystore.password</name>
    <value></value>
    <description>Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.keystore.keypassword</name>
    <value></value>
    <description>Must be specified.</description>
  </property>
  <property>
    <name>ssl.server.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".</description>
  </property>
</configuration>
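ssl-client.xml and ssl-server.xml belong in Hadoop's configuration directory next to hdfs-site.xml; a small sketch, assuming $HADOOP_HOME/etc/hadoop is that directory on every node:

cp ssl-client.xml ssl-server.xml $HADOOP_HOME/etc/hadoop/
for host in node02 node03; do
  scp $HADOOP_HOME/etc/hadoop/ssl-client.xml $HADOOP_HOME/etc/hadoop/ssl-server.xml $host:$HADOOP_HOME/etc/hadoop/
done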

4.3 If nothing unexpected happened, HDFS can now be started successfully (the plain-HTTP web UI on port 50070 may no longer be reachable, since the NameNode now serves HTTPS).
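A quick way to confirm the secure web UI is up is to hit the NameNode's default HTTPS port, 50470, which matches the SslSelectChannelConnectorSecure@0.0.0.0:50470 line in the log above; -k skips certificate verification because the CA is self-signed:

curl -k https://node01:50470/ -o /dev/null -w '%{http_code}\n'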

4.4 Test HDFS access from Java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.security.UserGroupInformation;

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.addResource(new Path("D:/.../hdfs-site.xml"));
    conf.addResource(new Path("D:/.../core-site.xml"));
    System.setProperty("java.security.krb5.conf", "D:/.../krb5.conf");
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab("hadoop/node01@HADOOP.COM", "D:/.../hdfs.keytab");
    // obtain a FileSystem that carries the Kerberos credentials
    FileSystem fileSystem1 = FileSystem.get(conf);
    // test access
    Path path = new Path("hdfs://node01:8020/user");
    if (fileSystem1.exists(path)) {
        System.out.println("===contains===");
    }
    RemoteIterator<LocatedFileStatus> list = fileSystem1.listFiles(path, true);
    while (list.hasNext()) {
        LocatedFileStatus fileStatus = list.next();
        System.out.println(fileStatus.getPath());
    }
}
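The same check can be done from the command line on one of the cluster nodes; this sketch assumes the merged keytab at /root/hadoop/hdfs.keytab and the hadoop/node01 principal created earlier:

kinit -kt /root/hadoop/hdfs.keytab hadoop/node01@HADOOP.COM
hdfs dfs -ls /user
# without a valid ticket, the same command should fail with
# "No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)"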

=================================================================================

5. Integrate Hive with Kerberos

5.1 Create the principals and generate the keytab

kadmin.local -q "addprinc -randkey hive/node01@HADOOP.COM"
kadmin.local -q "addprinc -randkey hive/node02@HADOOP.COM"
kadmin.local -q "addprinc -randkey hive/node03@HADOOP.COM"
kadmin.local -q "xst -k hive.keytab hive/node01@HADOOP.COM"
kadmin.local -q "xst -k hive.keytab hive/node02@HADOOP.COM"
kadmin.local -q "xst -k hive.keytab hive/node03@HADOOP.COM"

5.2 Modify hive-site.xml and add the following configuration

<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/root/hadoop/hive.keytab</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/root/hadoop/hive.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@HADOOP.COM</value>
</property>

5.3 Modify Hadoop's core-site.xml (the proxyuser entries allow the hive, hdfs, and HTTP service users to impersonate the end users who submit requests)

<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>

5.4 Synchronize the files modified above to the other nodes (node02 and node03) and check one by one that file permissions are correct.

5.5 Start the Hive services

./hive --service metastore
./hive --service hiveserver2
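A quick check with Beeline: first obtain a ticket for any user principal (hd1 from section 3.2 will do), then connect to HiveServer2. The principal in the JDBC URL must be HiveServer2's own service principal, assumed here to be hive/node01@HADOOP.COM since HiveServer2 runs on node01:

kinit hd1
beeline -u "jdbc:hive2://node01:10000/default;principal=hive/node01@HADOOP.COM" -e "show databases;"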

5.6 Test Hive access from Java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class KBSimple {
    private static String JDBC_DRIVER = "org.apache.hive.jdbc.HiveDriver";
    private static String CONNECTION_URL = "jdbc:hive2://node01:10000/;principal=hive/node01@HADOOP.COM";

    static {
        try {
            Class.forName(JDBC_DRIVER);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        Class.forName(JDBC_DRIVER);
        // log in with the Kerberos keytab before opening the JDBC connection
        System.setProperty("java.security.krb5.conf", "D:\\...\\krb5.conf");
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "Kerberos");
        conf.addResource(new Path("D:/.../hive-site.xml"));
        conf.addResource(new Path("D:/.../core-site.xml"));
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab("hive/node01@HADOOP.COM", "D:\\...\\hive.keytab");

        Connection connection = null;
        ResultSet rs = null;
        PreparedStatement ps = null;
        try {
            connection = DriverManager.getConnection(CONNECTION_URL);
            ps = connection.prepareStatement("select * from table1");
            rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

