Hadoop: Install on Ubuntu 14.04
Source: http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php
Hardware & OS Specifications

The preparation required:
- A WD Red 6TB hard disk for the test
- An ordinary 64-bit PC
- Ubuntu 14.04 64-bit installed
Java Installation

The Hadoop framework is written in Java!!
cd ~
sudo apt-get update
sudo locale-gen id_ID.UTF-8
sudo apt-get install default-jdk
Check the version:
java -version
java version "1.7.0_51"
OpenJDK Runtime Environment (IcedTea 2.4.6) (7u51-2.4.6-1ubuntu4)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
Add a Hadoop User
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.
Don't forget to add hduser as a sudoer. The easiest way:
sudo su
vi /etc/sudoers
Add:
hduser ALL=(ALL:ALL) ALL
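As an aside: editing /etc/sudoers directly with vi is risky, because a syntax error can lock you out of sudo entirely. A safer alternative (standard on Ubuntu, not specific to this guide) is visudo, which validates the file before saving:

sudo visudo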
Install ssh
Install ssh:
sudo apt-get install ssh
Check ssh:
which ssh
/usr/bin/ssh
Check sshd:
which sshd
/usr/sbin/sshd
Create & Set Up SSH Certificates
Hadoop requires SSH access to manage its nodes (remote & local machines). We will configure it to allow authentication using SSH public keys.
su hduser
ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
5c:4e:51:87:9f:00:64:a9:42:40:28:f1:b7:39:c5:04 hduser@wdred
The key's randomart image is:
+--[ RSA 2048]----+
|.. oEo. .=+...   |
|... o. ...o.     |
| .. ..o .o o .   |
|  . +...+ o      |
|   + .S .        |
|    .            |
|                 |
|                 |
|                 |
+-----------------+
Run:
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
so that the public key we just created is appended to the list of authorized keys, and we no longer need a password for ssh.
Test ssh to localhost:
ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is b4:47:39:22:4a:c1:fe:0a:af:28:a6:c2:9b:2f:4d:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)
..
..
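If ssh localhost still asks for a password at this point, the usual culprit is file permissions: OpenSSH ignores keys that are group- or world-readable. A quick fix, assuming the default OpenSSH behavior:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys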
Install Hadoop
As a regular user, run:
cd ~
wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
tar zxvf hadoop-2.7.1.tar.gz
The download takes quite a while, since the binary we are fetching is around 200+ MB :(
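Given the size, it is worth verifying the download before unpacking it. Apache publishes digests for every release; a minimal check (compare the printed digest by eye against the one listed on the Apache mirror/archive page):

sha256sum hadoop-2.7.1.tar.gz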
sudo su
cd /home/hduser/
mv hadoop-2.7.1 /usr/local/hadoop
chown -R hduser:hadoop /usr/local/hadoop
Set Up the Configuration Files
The following files need to be modified to complete the Hadoop setup:
~/.bashrc
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/hdfs-site.xml
~/.bashrc
Before editing .bashrc, we need to know the path where Java is installed, so we can set it in the JAVA_HOME environment variable:
update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.
Next, append the following at the end of ~/.bashrc:
vi ~/.bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_HOME=/usr/local/hadoop/share/hadoop/common
export HADOOP_VERSION=2.7.1
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
Activate it:
source ~/.bashrc
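To confirm the new variables took effect in the current shell, a quick sanity check — the hadoop command is now on the PATH and should print its version:

echo $JAVA_HOME
echo $HADOOP_INSTALL
hadoop version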
Note that JAVA_HOME must be the part of the path just before '.../bin/', as shown below:
javac -version
javac 1.7.0_51
which javac
/usr/bin/javac
readlink -f /usr/bin/javac
/usr/lib/jvm/java-7-openjdk-amd64/bin/javac
So JAVA_HOME is:
/usr/lib/jvm/java-7-openjdk-amd64/
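Rather than reading the path by eye, JAVA_HOME can be derived in one line from readlink — a small sketch assuming javac resolves to a .../bin/javac layout as above:

readlink -f /usr/bin/javac | sed 's:/bin/javac::'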
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
Set JAVA_HOME in the hadoop-env.sh file:
vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
/usr/local/hadoop/etc/hadoop/core-site.xml
The /usr/local/hadoop/etc/hadoop/core-site.xml file contains the configuration properties that Hadoop uses when it starts. It can be used to override Hadoop's default settings.
sudo mkdir -p /app/hadoop/tmp
sudo chown hduser:hadoop /app/hadoop/tmp
Enter the following between the <configuration></configuration> tags:
vi /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>
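Once the file is saved, Hadoop itself can echo the effective value back, which catches XML typos early. A quick check using the hdfs getconf tool that ships with Hadoop 2.x (fs.defaultFS is the current name of the deprecated fs.default.name property):

hdfs getconf -confKey fs.defaultFS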
/usr/local/hadoop/etc/hadoop/mapred-site.xml
By default, the folder /usr/local/hadoop/etc/hadoop/ contains
/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
which has to be renamed/copied to mapred-site.xml:
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
The mapred-site.xml file is used to specify which framework is used for MapReduce. We need to enter the following between the <configuration></configuration> tags:
vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>
/usr/local/hadoop/etc/hadoop/hdfs-site.xml
The /usr/local/hadoop/etc/hadoop/hdfs-site.xml file needs to be configured on every host in the cluster we use. It specifies the directories that will be used as the namenode and the datanode on that host.
Before editing the file, we need to create the two directories that will hold the namenode and the datanode for this Hadoop installation. This can be done with:
sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
sudo chown -R hduser:hadoop /usr/local/hadoop_store
Open the file and enter the configuration between the <configuration></configuration> tags:
vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>
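Before moving on, it is worth double-checking that the two storage directories exist and are owned by hduser, since a permission problem here is a common reason for the namenode or datanode refusing to start later:

ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode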
Format Hadoop Filesystem
The Hadoop filesystem needs to be formatted before we can use it. The format command needs write permission on the /usr/local/hadoop_store/hdfs/namenode folder.
hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/11/09 11:27:08 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = wdred/192.168.0.19
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath =
..
..
..
..
..
15/11/09 11:27:11 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/11/09 11:27:11 INFO namenode.FSNamesystem: HA Enabled: false
15/11/09 11:27:11 INFO namenode.FSNamesystem: Append Enabled: true
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map INodeMap
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/11/09 11:27:11 INFO namenode.FSDirectory: ACLs enabled? false
15/11/09 11:27:11 INFO namenode.FSDirectory: XAttrs enabled? true
15/11/09 11:27:11 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
15/11/09 11:27:11 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map cachedBlocks
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/11/09 11:27:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/11/09 11:27:11 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/11/09 11:27:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/11/09 11:27:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/11/09 11:27:11 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/11/09 11:27:11 INFO util.GSet: VM type       = 64-bit
15/11/09 11:27:11 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/11/09 11:27:11 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/11/09 11:27:12 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1834961819-192.168.0.19-1447043232009
15/11/09 11:27:12 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/11/09 11:27:12 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/11/09 11:27:12 INFO util.ExitUtil: Exiting with status 0
15/11/09 11:27:12 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wdred/192.168.0.19
************************************************************/
Note: the hadoop namenode -format command must be run BEFORE we start Hadoop. If it is run after Hadoop has been started, it will destroy all data currently in the Hadoop filesystem.
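A successful format leaves metadata behind in the namenode directory; as a sketch of what to look for (the exact field values vary per installation):

cat /usr/local/hadoop_store/hdfs/namenode/current/VERSION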
Starting Hadoop
Now we can start the newly installed single-node cluster. We can use
start-all.sh
start-dfs.sh
start-yarn.sh
Run:
cd /usr/local/hadoop/sbin
ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh
sudo su hduser
start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/11/09 11:32:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-wdred.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-wdred.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is b4:47:39:22:4a:c1:fe:0a:af:28:a6:c2:9b:2f:4d:57.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-wdred.out
15/11/09 11:33:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-wdred.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-wdred.out
Check whether everything is running properly:
jps
7245 NameNode
7380 DataNode
8193 Jps
7895 NodeManager
7607 SecondaryNameNode
7758 ResourceManager
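All five daemons (plus Jps itself) should be listed. A small shell sketch that scans the jps output and flags any missing daemon — the daemon names are the ones shown above:

for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  jps | grep -q "$d" && echo "$d: OK" || echo "$d: MISSING"
done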
Also check the listening ports:
netstat -plten | grep java
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp  0  0  0.0.0.0:50070    0.0.0.0:*  LISTEN  1001  16893  7245/java
tcp  0  0  0.0.0.0:8088     0.0.0.0:*  LISTEN  1001  20864  7758/java
tcp  0  0  127.0.0.1:54680  0.0.0.0:*  LISTEN  1001  18512  7380/java
tcp  0  0  0.0.0.0:50010    0.0.0.0:*  LISTEN  1001  18506  7380/java
tcp  0  0  0.0.0.0:50075    0.0.0.0:*  LISTEN  1001  18728  7380/java
tcp  0  0  0.0.0.0:8030     0.0.0.0:*  LISTEN  1001  20855  7758/java
tcp  0  0  0.0.0.0:8031     0.0.0.0:*  LISTEN  1001  20848  7758/java
tcp  0  0  0.0.0.0:8032     0.0.0.0:*  LISTEN  1001  20860  7758/java
tcp  0  0  0.0.0.0:8033     0.0.0.0:*  LISTEN  1001  21067  7758/java
tcp  0  0  0.0.0.0:52739    0.0.0.0:*  LISTEN  1001  21059  7895/java
tcp  0  0  0.0.0.0:50020    0.0.0.0:*  LISTEN  1001  18150  7380/java
tcp  0  0  127.0.0.1:54310  0.0.0.0:*  LISTEN  1001  17639  7245/java
tcp  0  0  0.0.0.0:8040     0.0.0.0:*  LISTEN  1001  22344  7895/java
tcp  0  0  0.0.0.0:8042     0.0.0.0:*  LISTEN  1001  22348  7895/java
tcp  0  0  0.0.0.0:50090    0.0.0.0:*  LISTEN  1001  19321  7607/java
Monitor Job & Task
- NameNode daemon: http://hdnode01:50070
- ResourceManager daemon: http://hdnode01:8088 (in Hadoop 2.x, YARN's ResourceManager replaces the old MRv1 JobTracker, which listened on port 50030)
- NodeManager daemon: http://hdnode01:8042 (replaces the old MRv1 TaskTracker, which listened on port 50060)
Stop Hadoop
We can run
stop-all.sh
stop-dfs.sh
stop-yarn.sh
Run:
cd /usr/local/hadoop/sbin
stop-all.sh
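Afterwards, jps should list nothing but the Jps process itself; if a daemon is still shown, it did not shut down cleanly and can be stopped by its PID:

jps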
Hadoop Web Interface
Let's start Hadoop again and look at its Web UI:
cd /usr/local/hadoop/sbin
start-all.sh
Browse to
http://localhost:50070
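On a headless server the same status pages can be checked without a browser; the NameNode web server also exposes machine-readable metrics on a /jmx endpoint in Hadoop 2.x:

curl -s http://localhost:50070 | head
curl -s http://localhost:50070/jmx | head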