Hadoop: Hive for SQL Queries


Source: http://hortonworks.com/hadoop/hive/

What is Hive

Hadoop was built to organize and store large amounts of data of all shapes, sizes and formats. Because of Hadoop's "schema on read" architecture, a Hadoop cluster is a perfect reservoir for heterogeneous data, structured and unstructured, from many different sources.

Data analysts use Hive to explore, structure and analyze that data, then turn it into actionable business insight.

Advantages of using Hive for enterprise SQL in Hadoop:

Familiar: Query data with a SQL-based language (see the example query after this list).
Fast: Interactive response times, even over huge datasets.
Scalable and Extensible: As data variety and volume grow, more commodity machines can be added without a corresponding reduction in performance.
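As a minimal illustration of the SQL-based language mentioned above, the hypothetical HiveQL query below (the pageviews table and its columns are assumptions, not taken from the source) filters and aggregates rows much as a query against a relational database would:

 -- Hypothetical example: count page views per country for one day.
 -- Table and column names (pageviews, country, view_date) are assumptions.
 SELECT country, COUNT(*) AS views
 FROM pageviews
 WHERE view_date = '2016-01-15'
 GROUP BY country
 ORDER BY views DESC
 LIMIT 10;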

How Hive Works

The tables in Hive are similar to tables in a relational database, and data units are organized in a taxonomy from larger to more granular units. Databases consist of tables, which are made up of partitions. Data can be accessed via a simple query language, and Hive supports overwriting or appending data.
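The sketch below shows, with assumed database, table and column names, how this taxonomy is typically expressed in HiveQL: a database contains a table, the table is split into partitions, and data can be appended to or overwritten within a partition.

 -- Hypothetical database and partitioned table; all names are assumptions.
 CREATE DATABASE IF NOT EXISTS weblogs;

 CREATE TABLE IF NOT EXISTS weblogs.pageviews (
   user_id STRING,
   url     STRING
 )
 PARTITIONED BY (view_date STRING);

 -- Append new rows to one partition ...
 INSERT INTO TABLE weblogs.pageviews PARTITION (view_date = '2016-01-15')
 SELECT user_id, url FROM staging_pageviews;

 -- ... or overwrite that partition entirely.
 INSERT OVERWRITE TABLE weblogs.pageviews PARTITION (view_date = '2016-01-15')
 SELECT user_id, url FROM staging_pageviews;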

Within a particular database, data in the tables is serialized and each table has a corresponding Hadoop Distributed File System (HDFS) directory. Each table can be sub-divided into partitions that determine how data is distributed within sub-directories of the table directory. Data within partitions can be further broken down into buckets.
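A rough sketch of how this maps onto HDFS, again with assumed table names and assuming the default warehouse path: each partition value becomes a sub-directory of the table directory, and bucketing splits the data in each partition into a fixed number of files.

 -- Partitioned and bucketed table; names and the warehouse path are assumptions.
 CREATE TABLE pageviews (
   user_id STRING,
   url     STRING
 )
 PARTITIONED BY (view_date STRING)
 CLUSTERED BY (user_id) INTO 32 BUCKETS;

 -- Typical resulting HDFS layout: one directory per partition,
 -- one file per bucket inside it, for example:
 --   /user/hive/warehouse/pageviews/view_date=2016-01-15/000000_0
 --   /user/hive/warehouse/pageviews/view_date=2016-01-15/000001_0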

Hive supports all the common primitive data types such as BIGINT, BINARY, BOOLEAN, CHAR, DECIMAL, DOUBLE, FLOAT, INT, SMALLINT, STRING, TIMESTAMP, and TINYINT. In addition, analysts can combine primitive data types to form complex data types, such as structs, maps and arrays.
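For illustration, the hypothetical table below (all names are assumptions) combines primitive columns with a struct, an array and a map; the query that follows shows how such complex fields can be addressed.

 -- Hypothetical table mixing primitive and complex types; names are assumptions.
 CREATE TABLE customers (
   id      BIGINT,
   name    STRING,
   active  BOOLEAN,
   signup  TIMESTAMP,
   address STRUCT<street:STRING, city:STRING, zip:STRING>,
   phones  ARRAY<STRING>,
   prefs   MAP<STRING, STRING>
 );

 -- Complex fields are addressed with dot, index and key syntax.
 SELECT name, address.city, phones[0], prefs['language']
 FROM customers;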
