Installing and Configuring Hadoop on Four Ubuntu VMs (Part 1)


1. Environment

(1) Hardware: Intel i5 3450, 8 GB RAM

(2) Software:

         Windows 7 as the host operating system

         VMware 9 as the virtualization software

         Guest operating system: 64-bit Ubuntu 12.04.1

         64-bit JDK, jdk-6u37-linux-x64.bin (installation steps are covered in a separate post)

(3) Server deployment

 (to be added)

 

2. Setting up each node

  (to be added)
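The rest of the post addresses the four machines by the hostnames NameNode01, NN02, DataNode01 and DN02, so every node must be able to resolve the others' names. A minimal /etc/hosts sketch for all four machines; only DataNode01's address, 192.168.0.112, actually appears in the ssh output below, and the other addresses are placeholders to be adapted to your own network:

192.168.0.111   NameNode01    # placeholder address
192.168.0.112   DataNode01    # address seen in the ssh output below
192.168.0.113   NN02          # placeholder address
192.168.0.114   DN02          # placeholder address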

 

3. Configuring ssh

(1) Ubuntu installs the ssh client by default, but the ssh server still needs to be installed on each of the Ubuntu machines; the installation steps are covered in a separate post.
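As a minimal sketch, the server can be installed on Ubuntu 12.04 from any account that can use sudo (for example the initial user):

sudo apt-get update                     (refresh the package lists)
sudo apt-get install openssh-server     (the client package openssh-client is normally preinstalled)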

(2) Create the hadoop user (the same hadoop user has to be created on every server, that is, on each of the Ubuntu VMs, that will run Hadoop):

 

chensl@namenode:~$ sudo addgroup hadoop
[sudo] password for chensl: 
Adding group `hadoop' (GID 1001) ...
Done.
chensl@namenode:~$ sudo adduser --ingroup hadoop hadoop
Adding user `hadoop' ...
Adding new user `hadoop' (1001) with group `hadoop' ...
Creating home directory `/home/hadoop' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 

 Enter the password: hadoop

passwd: password updated successfully
Changing the user information for hadoop
Enter the new value, or press ENTER for the default
	Full Name []: hadoop
	Room Number []:       
	Work Phone []: 
	Home Phone []: 
	Other []: 
Is the information correct? [Y/n] Y

 Enter Y and the user is created successfully.

When the newly added hadoop user tries to use sudo, the following message appears:

hadoop is not in the sudoers file.  This incident will be reported.

 This means the hadoop user cannot run sudo commands yet, so the following work is needed
      (note: a freshly installed Ubuntu has no root password set; first run sudo passwd root from the original account to set one, otherwise you cannot su to root).
Run:

su -       (switching to root with "su" instead of "su -" may leave you unable to edit the sudoers file)

Enter the root password; once logged in, run

ls -l /etc/sudoers

 This shows:

  -r--r----- 1 root root 723 Jan 31  2012 /etc/sudoers

Run the following to make the file writable:

chmod u+w /etc/sudoers    (make the file writable)

 Now you can see:

root@NameNode01:~# ls -l /etc/sudoers   (check the permissions)
-rw-r----- 1 root root 723 Jan 31  2012 /etc/sudoers

 Then continue with:

gedit /etc/sudoers      (edit the file)

 After the line  root  ALL=(ALL:ALL)  ALL  add

hadoop ALL=(ALL) ALL

chmod u-w /etc/sudoers   (restore the original read-only permissions)
exit                     (leave the root session)
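As an alternative to making the file writable and editing it with gedit, the same line can be added through visudo, which validates the syntax before saving; a minimal sketch:

su -                     (switch to root)
visudo                   (opens /etc/sudoers with a syntax check on save;
                          add the line "hadoop ALL=(ALL) ALL" below the root entry)
exit                     (leave the root session)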

(3) Generate an ssh key:

su - hadoop                  (switch to the hadoop user)
ssh-keygen -t rsa -P ""      (this command generates the key pair id_rsa and id_rsa.pub
                              for the hadoop user on this Ubuntu system)

 The output:

chensl@NameNode01:~$ su - hadoop
Password:
hadoop@NameNode01:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
ab:88:59:08:33:3e:c6:59:17:a6:58:46:61:a8:60:65 hadoop@NameNode01
The key's randomart image is:
+--[ RSA 2048]----+
|  oE.            |
|.o+              |
|+  o o           |
|. + o .          |
|+. o .  S        |
|o+o..    .       |
| *. .   .        |
|. .+ . .         |
|  o . .          |
+-----------------+

(4) Set up passwordless ssh key authentication between the Ubuntu machines

On each of the other Ubuntu machines, create a .ssh directory under hadoop's home directory, i.e. /home/hadoop/.ssh (if the hadoop user does not exist there yet, create it by following the steps above):

 

hadoop@DataNode01:~$ pwd
/home/hadoop                       (in hadoop's home directory)
hadoop@DataNode01:~$ mkdir .ssh    (run on DataNode01)
hadoop@NN02:~$ mkdir .ssh          (run on NN02)
hadoop@DN02:~$ mkdir .ssh          (run on DN02)
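Optionally, the directory permissions can be tightened on each machine; with its default StrictModes setting, sshd ignores authorized_keys when ~/.ssh is group- or world-writable, so this is a cheap safeguard (usually unnecessary with the default umask):

hadoop@DataNode01:~$ chmod 700 /home/hadoop/.ssh    (repeat on each of the machines)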

 

Copy id_rsa.pub from NameNode01 to the corresponding directory on the other three Ubuntu machines; taking DataNode01 as the example:

scp /home/hadoop/.ssh/id_rsa.pub  DataNode01:/home/hadoop/.ssh

 The output:

hadoop@NameNode01:~$ scp /home/hadoop/.ssh/id_rsa.pub  DataNode01:/home/hadoop/.ssh
The authenticity of host 'datanode01 (192.168.0.112)' can't be established.
ECDSA key fingerprint is ad:c8:15:c2:d5:af:24:aa:a2:be:34:9f:51:9b:d9:23.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'datanode01,192.168.0.112' (ECDSA) to the list of known hosts.
hadoop@datanode01's password:
id_rsa.pub                                    100%  399     0.4KB/s   00:00   

Log in to DataNode01 and check the /home/hadoop/.ssh directory:

hadoop@DataNode01:~$ ls -l .ssh    
total 4
-rw-r--r-- 1 hadoop hadoop 399 Feb 21 10:56 id_rsa.pub    (the result)

 

Likewise, run the following on NameNode01

scp /home/hadoop/.ssh/id_rsa.pub  DN02:/home/hadoop/.ssh
scp /home/hadoop/.ssh/id_rsa.pub  NN02:/home/hadoop/.ssh

 to place id_rsa.pub in the corresponding directory on those machines as well.

 

Then log in to DataNode01, DN02 and NN02 in turn and rename the id_rsa.pub in /home/hadoop/.ssh/ to authorized_keys by running the commands below:

 

hadoop@DataNode01:~$ mv /home/hadoop/.ssh/id_rsa.pub  /home/hadoop/.ssh/authorized_keys
(run on DataNode01)

hadoop@NN02:~$ mv /home/hadoop/.ssh/id_rsa.pub  /home/hadoop/.ssh/authorized_keys
(run on NN02)

hadoop@DN02:~$ mv /home/hadoop/.ssh/id_rsa.pub  /home/hadoop/.ssh/authorized_keys
(run on DN02)
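If a machine already had an authorized_keys file, the mv above would overwrite it; appending is the safer variant and gives the same result when the file does not exist yet. A minimal sketch, shown for DataNode01:

hadoop@DataNode01:~$ cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
hadoop@DataNode01:~$ rm /home/hadoop/.ssh/id_rsa.pub    (the copied public key file is no longer needed)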

 

At this point NameNode01 can ssh to the other three machines without a password; this can be verified with the following command:

hadoop@NameNode01:~$ ssh DataNode01   (ssh to DataNode01; the output is shown below)
The authenticity of host 'DataNode01 (192.168.0.112)' can't be established.

ECDSA key fingerprint is 85:f7:86:6f:3c:cc:30:69:fc:8a:94:87:32:e7:e3:51.

Are you sure you want to continue connecting (yes/no)? yes  (the first ssh login requires answering yes)
Warning: Permanently added 'DataNode01' (ECDSA) to the list of known hosts.

Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-29-generic x86_64)
 * Documentation:  https://help.ubuntu.com/
338 packages can be updated.
104 updates are security updates.
Last login: Thu Feb 21 12:30:31 2013 from namenode01
hadoop@DataNode01:~$     (now logged in from NameNode01 to DataNode01; run exit to return)

  

NameNode01 can now ssh to the other machines without a password; with the following step, an ssh login to itself needs no password either.

hadoop@NameNode01:~$ cp /home/hadoop/.ssh/id_rsa.pub  /home/hadoop/.ssh/authorized_keys
(copy the public key into authorized_keys)

hadoop@NameNode01:~$ ssh localhost     (ssh into NameNode01 itself)

 

NameNode01 can now ssh to any of the Ubuntu machines, but the other three cannot yet log in to NameNode01. What remains is to copy the id_rsa generated on NameNode01 into the same directory on the other three machines:

hadoop@NameNode01:~$ scp /home/hadoop/.ssh/id_rsa  DataNode01:/home/hadoop/.ssh
id_rsa                                        100% 1679     1.6KB/s   00:00    
hadoop@NameNode01:~$ scp /home/hadoop/.ssh/id_rsa  NN02:/home/hadoop/.ssh
id_rsa                                        100% 1679     1.6KB/s   00:00    
hadoop@NameNode01:~$ scp /home/hadoop/.ssh/id_rsa  DN02:/home/hadoop/.ssh
id_rsa                                        100% 1679     1.6KB/s   00:00    
hadoop@NameNode01:~$ 
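To confirm that the reverse direction works, a quick check from one of the other nodes (the first connection will again ask to accept the host key):

hadoop@DataNode01:~$ ssh NameNode01    (should log in without prompting for a password)
hadoop@NameNode01:~$ exit              (return to DataNode01)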

 

 Now any two of the four Ubuntu VMs can ssh to each other without a password, and the ssh configuration is complete.
