
Deploying Hadoop in Pseudo-Distributed Mode on Linux

Date: 2017/1/20 17:44:41      Editor: Linux System FAQ

Hadoop version: 1.2.1

  The Linux distribution is Fedora 19, and the installation is performed under the hadoop account.

  Step 1: Configure key-based SSH login to localhost (even in pseudo-distributed mode, Hadoop still communicates over SSH)

  [hadoop@promote ~]$ which ssh

  /usr/bin/ssh

  [hadoop@promote ~]$ which ssh-keygen

  /usr/bin/ssh-keygen

  [hadoop@promote ~]$ which sshd

  /usr/sbin/sshd

  [hadoop@promote ~]$ ssh-keygen -t rsa

  Press Enter at each prompt to accept the default key path and an empty passphrase:

  Generating public/private rsa key pair.

  Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

  Created directory '/home/hadoop/.ssh'.

  Enter passphrase (empty for no passphrase):

  Enter same passphrase again:

  Passphrases do not match.  Try again.

  Enter passphrase (empty for no passphrase):

  Enter same passphrase again:

  Your identification has been saved in /home/hadoop/.ssh/id_rsa.

  Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

  The key fingerprint is:

  2f:a9:60:c7:dc:38:8f:c7:bb:70:de:d4:39:c3:39:87 [email protected]

  The key's randomart image is:

  +--[ RSA 2048]----+

  |                 |

  |                 |

  |                 |

  |                 |

  |        S        |

  |     o o o o +   |

  |    o B.= o E .  |

  |   . o Oo+   =   |

  |      o.=o.      |

  +-----------------+

  This generates the private key id_rsa and the public key id_rsa.pub under /home/hadoop/.ssh/:

  [hadoop@promote ~]$ cd /home/hadoop/.ssh/

  [hadoop@promote .ssh]$ ls

  id_rsa  id_rsa.pub
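The interactive prompts above can also be skipped entirely. A minimal non-interactive sketch (it writes to a temporary scratch directory rather than ~/.ssh, so it is safe to run next to an existing key; `KEYDIR` is an illustrative name, not from the original article):

```shell
# Generate an RSA key pair with an empty passphrase (-N "") and no
# prompts (-q); KEYDIR is a scratch directory used for illustration.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"
ls "$KEYDIR"
```

For a real setup you would pass `-f ~/.ssh/id_rsa` instead of the scratch path.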

  Edit the sshd service configuration file:

  [hadoop@promote .ssh]$ su root

  Password:

  [root@promote .ssh]# vi /etc/ssh/sshd_config

  Enable public-key authentication (remove the leading # to uncomment these lines):

  RSAAuthentication yes

  PubkeyAuthentication yes

  # The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2

  # but this is overridden so installations will only check .ssh/authorized_keys

  AuthorizedKeysFile      .ssh/authorized_keys


Save and exit, then restart the sshd service:

  [root@promote .ssh]# service sshd restart

  Redirecting to /bin/systemctl restart  sshd.service

  [root@promote .ssh]# ps -ef|grep sshd

  root      1995     1  0 22:33 ?        00:00:00 sshd: hadoop [priv]

  hadoop    2009  1995  0 22:33 ?        00:00:00 sshd: hadoop@pts/0

  root      4171     1  0 23:11 ?        00:00:00 /usr/sbin/sshd -D

  root      4175  3397  0 23:12 pts/0    00:00:00 grep --color=auto sshd

  Then switch back to the hadoop user and append the SSH public key to the /home/hadoop/.ssh/authorized_keys file:

  [root@promote .ssh]# su hadoop

  [hadoop@promote .ssh]$ cat id_rsa.pub >> authorized_keys

  Change the permissions of authorized_keys to 644 (this step is required):

  [hadoop@promote .ssh]$ chmod 644 authorized_keys

  [hadoop@promote .ssh]$ ssh localhost

  The authenticity of host 'localhost (127.0.0.1)' can't be established.

  RSA key fingerprint is 25:1f:be:72:7b:83:8e:c7:96:b6:71:35:fc:5d:2e:7d.

  Are you sure you want to continue connecting (yes/no)? yes

  Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

  Last login: Thu Feb 13 23:42:43 2014

  The first login saves the host key in /home/hadoop/.ssh/known_hosts; subsequent logins will not prompt for a password:

  [hadoop@promote .ssh]$ ssh localhost

  Last login: Thu Feb 13 23:46:04 2014 from localhost.localdomain

  This completes the SSH key configuration.
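The chmod 644 step above matters because sshd ignores authorized_keys when the file or the ~/.ssh directory is group- or world-writable. A small sketch of the mode tightening, run in a scratch directory with a placeholder key line so it can be executed anywhere (`SSHDIR` and the key content are illustrative, not from the original transcript):

```shell
# SSHDIR stands in for /home/hadoop/.ssh; the key line is a placeholder.
SSHDIR="$(mktemp -d)/.ssh"
mkdir -p "$SSHDIR"
echo "ssh-rsa AAAAB3placeholder [email protected]" > "$SSHDIR/id_rsa.pub"
cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
chmod 700 "$SSHDIR"                   # directory: owner-only access
chmod 644 "$SSHDIR/authorized_keys"   # file: owner-writable, world-readable
stat -c '%a %n' "$SSHDIR/authorized_keys"
```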

  Step 2: Install the JDK

  [hadoop@promote ~]$ java -version

  java version "1.7.0_25"

  OpenJDK Runtime Environment (fedora-2.3.10.3.fc19-i386)

  OpenJDK Client VM (build 23.7-b01, mixed mode)

  Replace OpenJDK with Oracle's Java SE:

  [hadoop@promote .ssh]$ cd ~

  [hadoop@promote ~]$ uname -i

  i386

  Download jdk-6u45-linux-i586.bin from Oracle's website and upload it to the server, make it executable, run the installer, and finally delete the installer:

  [hadoop@promote ~]$ chmod u+x jdk-6u45-linux-i586.bin

  [hadoop@promote ~]$ ./jdk-6u45-linux-i586.bin

  [hadoop@promote ~]$ rm -rf jdk-6u45-linux-i586.bin

  [hadoop@promote ~]$ export PATH=$PATH:/home/hadoop/jdk1.6.0_45/bin

  The following output indicates the JDK installed successfully:

  [hadoop@promote ~]$ java -version

  java version "1.6.0_45"

  Java(TM) SE Runtime Environment (build 1.6.0_45-b06)

  Java HotSpot(TM) Client VM (build 20.45-b01, mixed mode, sharing)
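Note that the `export PATH=...` above only lasts for the current shell session. One way to make it permanent (assuming the install path shown above) is to append the exports to the hadoop user's ~/.bashrc. The sketch below writes to a temporary stand-in file instead of the real ~/.bashrc so it is harmless to run:

```shell
# PROFILE stands in for /home/hadoop/.bashrc in this illustration.
PROFILE=$(mktemp)
cat >> "$PROFILE" <<'EOF'
export JAVA_HOME=/home/hadoop/jdk1.6.0_45
export PATH=$PATH:$JAVA_HOME/bin
EOF
cat "$PROFILE"
```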

  Step 3: Install Hadoop

  Download hadoop-1.2.1.tar.gz from the official Hadoop site and upload it to /home/hadoop on the server:

  [hadoop@promote ~]$ tar -xzf hadoop-1.2.1.tar.gz

  [hadoop@promote ~]$ rm -rf hadoop-1.2.1.tar.gz

  [hadoop@promote ~]$ cd hadoop-1.2.1/conf/

  [hadoop@promote conf]$ vi hadoop-env.sh

  Point JAVA_HOME at the directory of the JDK installed in step 2:

  # The java implementation to use.  Required.

  export JAVA_HOME=/home/hadoop/jdk1.6.0_45

  Save and exit.


Step 4: Edit the Hadoop configuration files

  Edit core-site.xml:

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
    </property>
  </configuration>

  Edit mapred-site.xml:

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>localhost:9001</value>
    </property>
  </configuration>

  Edit hdfs-site.xml:

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>

  The SecondaryNameNode specified in masters and the slave nodes specified in slaves both point to the local machine:

  [hadoop@promote conf]$ cat masters

  localhost

  [hadoop@promote conf]$ cat slaves

  localhost
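The three XML edits above can also be scripted. A sketch assuming the conf directory layout of hadoop-1.2.1 (`CONF` is a temporary stand-in here so the snippet is safe to run anywhere); only core-site.xml is shown, and the other two files follow the same pattern:

```shell
# CONF stands in for /home/hadoop/hadoop-1.2.1/conf in this sketch.
CONF=$(mktemp -d)
cat > "$CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
grep '<value>' "$CONF/core-site.xml"
```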

  Step 5: Start Hadoop
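The article stops short of the actual start-up commands. The usual sequence for Hadoop 1.x, sketched here from the standard 1.2.1 scripts rather than from the original transcript, is to format the NameNode once, start all daemons, and verify them with jps:

```shell
# Run from the Hadoop install directory. Format HDFS only on first
# use: re-formatting destroys any existing HDFS data.
cd /home/hadoop/hadoop-1.2.1
bin/hadoop namenode -format
bin/start-all.sh
# jps should then list NameNode, SecondaryNameNode, DataNode,
# JobTracker and TaskTracker alongside jps itself.
jps
```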
