Hadoop: Install on Ubuntu 14.04

Source: http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php


Hardware & OS Specification

Preparation for this walkthrough:

  • WD Red 6TB hard disk for testing
  • An ordinary 64-bit PC
  • Ubuntu 14.04 64-bit installed


Installing Java

The Hadoop framework is written in Java, so we need a JDK first.

cd ~
# update the package source list
sudo apt-get update
# (optional) generate the Indonesian locale used on the author's machine
sudo locale-gen id_ID.UTF-8
# OpenJDK is the default Java provided by the Ubuntu repositories
sudo apt-get install default-jdk

Check the Java version:

java -version
java version "1.7.0_51"
OpenJDK Runtime Environment (IcedTea 2.4.6) (7u51-2.4.6-1ubuntu4)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)

Add a dedicated Hadoop user

sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.

Don't forget to add hduser as a sudoer. The easiest way:

sudo su
vi /etc/sudoers

Add the following line:

hduser  ALL=(ALL:ALL) ALL
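
To verify that the sudo rights actually took effect (an extra check, not part of the original guide):

su - hduser
sudo -l          # should list (ALL : ALL) ALL for user hduser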

Install SSH

Install the ssh package:

sudo apt-get install ssh

Check where the ssh client is:

which ssh
/usr/bin/ssh

Check the sshd daemon:

which sshd
/usr/sbin/sshd


Create & Set Up SSH Keys

Hadoop needs SSH access to manage its nodes (remote machines as well as the local machine). We will configure SSH to allow public-key authentication so that no password is needed.

su hduser
ssh-keygen -t rsa -P ""


Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
5c:4e:51:87:9f:00:64:a9:42:40:28:f1:b7:39:c5:04 hduser@wdred
The key's randomart image is:
+--[ RSA 2048]----+
|.. oEo.  .=+...  |
|...  o.  ...o.   |
| .. ..o  .o  o . |
|   . +...+    o  |
|    +  .S .      |
|     .           |
|                 |
|                 |
|                 |
+-----------------+


Then run:

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

This appends the newly created public key to the list of authorized keys, so Hadoop can use ssh without being prompted for a password.

Test ssh to localhost:

ssh localhost


The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is b4:47:39:22:4a:c1:fe:0a:af:28:a6:c2:9b:2f:4d:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)
..
..
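
Note (my addition): the ssh localhost command above leaves you inside a nested ssh session; exit it before continuing:

exit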


Install Hadoop

As a regular user, run:

cd ~
wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
tar zxvf hadoop-2.7.1.tar.gz

The download takes quite a while, since the binary tarball is around 200+ MB.

sudo su
cd /home/hduser/
mv hadoop-2.7.1 /usr/local/hadoop
chown -R hduser:hadoop /usr/local/hadoop
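
A quick check (my addition) that the move and the ownership change worked as expected:

ls -ld /usr/local/hadoop
# expected owner and group: hduser hadoop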

Setting Up the Configuration Files

The following files need to be modified to complete the Hadoop setup:

~/.bashrc
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/hdfs-site.xml

~/.bashrc:

Before editing .bashrc, we need to find the path where Java is installed, so we can set the JAVA_HOME environment variable:

update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.

Next, append the following to the end of ~/.bashrc:

vi ~/.bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_HOME=/usr/local/hadoop/share/hadoop/common
export HADOOP_VERSION=2.7.1
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

Activate the new variables:

source ~/.bashrc

Note that JAVA_HOME must be the path just before '.../bin/' below:

javac -version
javac 1.7.0_51
which javac
/usr/bin/javac
readlink -f /usr/bin/javac
/usr/lib/jvm/java-7-openjdk-amd64/bin/javac

So JAVA_HOME is:

/usr/lib/jvm/java-7-openjdk-amd64/
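
As a quick sanity check (an extra step; assumes the current shell has sourced ~/.bashrc), confirm that the variables are picked up:

echo $JAVA_HOME
# /usr/lib/jvm/java-7-openjdk-amd64/
hadoop version
# first line of the output should read: Hadoop 2.7.1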

/usr/local/hadoop/etc/hadoop/hadoop-env.sh

Set JAVA_HOME in the hadoop-env.sh file:

vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64


/usr/local/hadoop/etc/hadoop/core-site.xml

The /usr/local/hadoop/etc/hadoop/core-site.xml file contains configuration properties that Hadoop uses when it starts up. This file can be used to override the default settings Hadoop starts with.

sudo mkdir -p /app/hadoop/tmp
sudo chown hduser:hadoop /app/hadoop/tmp

Enter the following between the <configuration></configuration> tags:

vi /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>
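
Side note (my addition, not from the source tutorial): in Hadoop 2.x the fs.default.name property is deprecated in favor of fs.defaultFS; the old name still works but is mapped internally and reported as deprecated. The equivalent modern form would be:

 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:54310</value>
 </property>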


/usr/local/hadoop/etc/hadoop/mapred-site.xml

By default, the /usr/local/hadoop/etc/hadoop/ folder contains

/usr/local/hadoop/etc/hadoop/mapred-site.xml.template

which has to be renamed/copied to mapred-site.xml:

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

The mapred-site.xml file is used to specify which framework is used for MapReduce. We need to enter the following between the <configuration></configuration> tags:

vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>
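
Note (my addition): mapred.job.tracker above is the classic MRv1 property carried over from the source tutorial. On Hadoop 2.7.1, MapReduce jobs normally run on YARN, which is selected with mapreduce.framework.name instead:

 <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
 </property>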

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

The /usr/local/hadoop/etc/hadoop/hdfs-site.xml file needs to be configured on every host in the cluster being used. It specifies the directories that will be used as the namenode and the datanode on that host.

Before editing this file, we need to create the two directories that will hold the namenode and the datanode data for this Hadoop installation. This can be done with:

sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
sudo chown -R hduser:hadoop /usr/local/hadoop_store

Open the file and enter the following between the <configuration></configuration> tags:

vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>


Format the Hadoop Filesystem

The Hadoop filesystem needs to be formatted before we can start using it. The format command needs write permission, since it creates the current directory under /usr/local/hadoop_store/hdfs/namenode:

hadoop namenode -format


DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/11/09 11:27:08 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = wdred/192.168.0.19
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath =
..
..
..
..
..
15/11/09 11:27:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/11/09 11:27:11 INFO namenode.FSNamesystem: HA Enabled: false
15/11/09 11:27:11 INFO namenode.FSNamesystem: Append Enabled: true
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map INodeMap
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/11/09 11:27:11 INFO namenode.FSDirectory: ACLs enabled? false
15/11/09 11:27:11 INFO namenode.FSDirectory: XAttrs enabled? true
15/11/09 11:27:11 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/11/09 11:27:11 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map cachedBlocks
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/11/09 11:27:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/11/09 11:27:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and  retry cache entry expiry time is 600000 millis
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/11/09 11:27:12 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1834961819-192.168.0.19-1447043232009
15/11/09 11:27:12 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/11/09 11:27:12 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/11/09 11:27:12 INFO util.ExitUtil: Exiting with status 0
15/11/09 11:27:12 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wdred/192.168.0.19
************************************************************/

Note: the hadoop namenode -format command must be run once, BEFORE we start using Hadoop. If it is run again after Hadoop has been in use, it will destroy all the data on the Hadoop filesystem.
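
As the DEPRECATED message in the log above suggests, the modern equivalent of the command is:

hdfs namenode -format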


Starting Hadoop

Now we can start the newly installed single-node cluster. We can use either start-all.sh, or start-dfs.sh plus start-yarn.sh:

start-all.sh
start-dfs.sh
start-yarn.sh

Run:

cd /usr/local/hadoop/sbin
ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh



sudo su hduser
start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/11/09 11:32:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-wdred.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-wdred.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is b4:47:39:22:4a:c1:fe:0a:af:28:a6:c2:9b:2f:4d:57.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-wdred.out
15/11/09 11:33:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-wdred.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-wdred.out


Check whether everything is running properly:

jps
7245 NameNode
7380 DataNode
8193 Jps
7895 NodeManager
7607 SecondaryNameNode
7758 ResourceManager

Another check, using netstat:

netstat -plten | grep java
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1001       16893       7245/java       
tcp        0      0 0.0.0.0:8088            0.0.0.0:*               LISTEN      1001       20864       7758/java       
tcp        0      0 127.0.0.1:54680         0.0.0.0:*               LISTEN      1001       18512       7380/java       
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      1001       18506       7380/java       
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      1001       18728       7380/java       
tcp        0      0 0.0.0.0:8030            0.0.0.0:*               LISTEN      1001       20855       7758/java       
tcp        0      0 0.0.0.0:8031            0.0.0.0:*               LISTEN      1001       20848       7758/java       
tcp        0      0 0.0.0.0:8032            0.0.0.0:*               LISTEN      1001       20860       7758/java       
tcp        0      0 0.0.0.0:8033            0.0.0.0:*               LISTEN      1001       21067       7758/java       
tcp        0      0 0.0.0.0:52739           0.0.0.0:*               LISTEN      1001       21059       7895/java       
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      1001       18150       7380/java       
tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      1001       17639       7245/java       
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      1001       22344       7895/java       
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      1001       22348       7895/java       
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1001       19321       7607/java
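
Once the daemons are up, a minimal HDFS smoke test (my addition; the paths are just examples) confirms that the filesystem is usable:

hdfs dfs -mkdir -p /user/hduser
hdfs dfs -put /etc/hosts /user/hduser/
hdfs dfs -ls /user/hduser
hdfs dfs -cat /user/hduser/hosts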

Monitor Job & Task

  • NameNode daemon: http://hdnode01:50070
  • JobTracker daemon: http://hdnode01:50030
  • TaskTracker daemon: http://hdnode01:50060
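
On Hadoop 2.x the JobTracker/TaskTracker daemons listed above are replaced by YARN; job status is served by the ResourceManager web UI on port 8088 (visible in the netstat output above). A quick probe from the shell, assuming the default REST endpoint:

curl -s http://localhost:8088/ws/v1/cluster/info
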
Stop Hadoop

To stop all the daemons running on the machine, we can use

stop-all.sh
stop-dfs.sh
stop-yarn.sh


Run:

cd /usr/local/hadoop/sbin
stop-all.sh
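
To confirm that everything has stopped (my addition), run jps again; only the Jps process itself should remain:

jps
# only "Jps" should be listed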

Hadoop Web Interface

Let's start Hadoop again and look at its web UI:

cd /usr/local/hadoop/sbin
start-all.sh

Then browse to:

http://localhost:50070

References

  • http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php