
Hadoop Part 1: Deploying a Cluster


The continuous growth of data volumes and the increasing speed at which data is generated pose real challenges for processing and storage. Not surprisingly, "Big Data" is one of the most talked-about topics in today's IT community.

There is plenty of material on the theory of "big data" in specialized journals and on websites. But theoretical publications rarely make it clear how these technologies can be applied to specific practical problems.
One of the best-known and most widely discussed projects in distributed computing is Hadoop, a freely distributed set of utilities, libraries, and a framework for developing and running distributed programs, maintained by the Apache Software Foundation.

We have been using Hadoop for a long time to solve our own practical problems, and the results of our work in this area are worth sharing with the community. This article is the first in a series about Hadoop. Today we will cover the history and structure of the Hadoop project, and then show, using the Cloudera Hadoop distribution as an example, how a cluster is deployed and configured.

Warning: lots of images (traffic) under the cut.

A bit of history


Hadoop was created by Doug Cutting, the author of Apache Lucene, the famous text search library. The project is named after the plush yellow elephant of Doug's son, who came up with the name himself.

Cutting created Hadoop while working on Nutch, an open-source web search system. The Nutch project was launched in 2002, but its developers soon realized that the existing architecture was unlikely to scale to billions of web pages. In 2003, a paper was published describing GFS (Google File System), the distributed file system used in Google's projects. A system of this kind could easily handle the storage of the large files produced by crawling and indexing websites. In 2004, the Nutch team set about implementing such a system as open source: NDFS (Nutch Distributed File System).

In 2004, Google also introduced MapReduce to a wide audience. By early 2005, the Nutch developers had created a full-fledged MapReduce implementation within Nutch; shortly thereafter, all the main Nutch algorithms were adapted to use MapReduce and NDFS. In 2006, this code was spun off into Hadoop, an independent subproject under Lucene.

In 2008, Hadoop became a top-level Apache project. By that time it was already successfully used in companies such as Yahoo!, Facebook and Last.fm. Today, Hadoop is widely used both in commercial companies and in scientific and educational institutions.

Hadoop project structure


The Hadoop project includes the following subprojects:

- Hadoop Common: the common utilities and libraries that support the other modules;
- HDFS (Hadoop Distributed File System): a distributed file system providing high-throughput access to data;
- YARN: a framework for job scheduling and cluster resource management;
- Hadoop MapReduce: a system for parallel processing of large data sets.

Previously, Hadoop included a number of other subprojects that are now stand-alone products of the Apache Software Foundation:

- Avro, a data serialization system;
- HBase, a distributed column-oriented database;
- Hive, a distributed data warehouse;
- Pig, a dataflow language and execution framework for parallel computation;
- ZooKeeper, a distributed coordination service.

Hadoop distributions


Today Hadoop is a complex system consisting of a large number of components, and installing and configuring such a system by hand is a difficult task. For this reason, many companies now offer ready-made Hadoop distributions that include deployment, administration and monitoring tools.

Hadoop distributions are available both under commercial licenses (products from Intel, IBM, EMC, Oracle) and under free licenses (products from Cloudera, Hortonworks and MapR). Below we describe the Cloudera Hadoop distribution in more detail.

Cloudera Hadoop


Cloudera Hadoop is a fully open distribution created with the active participation of the Apache Hadoop developers Doug Cutting and Mike Cafarella. It is available both in a free version and in a paid version known as Cloudera Enterprise.

When we first became interested in the Hadoop project, Cloudera offered the most complete and mature solution among the open-source Hadoop distributions. In all the time we have used it, we have not encountered a single significant problem, and the cluster has successfully survived several major upgrades, all fully automatic. After almost a year of experimenting, we can say that we are satisfied with our choice.

Cloudera Hadoop includes the following main components:

- Apache Hadoop itself (HDFS and MapReduce);
- ecosystem projects such as HBase, Hive, Pig, Oozie, ZooKeeper, Flume and Sqoop;
- Hue, a web interface for working with the cluster;
- Cloudera Manager, the deployment, management and monitoring tool.

The components of Cloudera Hadoop are distributed in binary packages called parcels. Compared to standard packages and package managers, parcels have the following advantages:

- a parcel bundles a complete, internally consistent set of CDH components in a single versioned archive;
- parcels are installed outside the system package paths, so several versions can coexist on a host and switching between them is easy;
- this, in turn, makes automatic rolling upgrades (and, if necessary, downgrades) of the cluster possible.

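To illustrate, on a node managed through parcels all versions live side by side under /opt/cloudera/parcels, and the active one is exposed through a symlink (the version strings below are illustrative examples, not exact values):

  # ls /opt/cloudera/parcels/
  CDH  CDH-4.3.0-1.cdh4.3.0.p0.22
  # readlink /opt/cloudera/parcels/CDH
  CDH-4.3.0-1.cdh4.3.0.p0.22
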
Hardware requirements


Hardware requirements for deploying Hadoop are a fairly complex topic: different nodes within the cluster have different requirements. You can read more about this, for example, in Intel's recommendations or on the Cloudera blog. The general rule: more memory and more disks! RAID controllers and other enterprise niceties are unnecessary thanks to the very architecture of Hadoop and HDFS, which is designed to run on ordinary commodity servers. 10GbE network cards become justified at data volumes above 12TB per node.

The Cloudera blog provides a list of recommended hardware configurations (CPU, memory and disks) for various workload profiles, from light CPU-bound processing to storage-heavy setups.

Note that when renting servers, the cost of a poorly chosen configuration is far less painful than when buying your own: if necessary, leased servers can be reconfigured or replaced with ones better suited to your tasks.

Now let's proceed directly to installing our cluster.

Install and configure the OS


For all servers we will use CentOS 6.4 in a minimal installation, although other distributions can be used as well: Debian, Ubuntu, RHEL, etc. The required packages are publicly available at archive.cloudera.com and are installed with the standard package managers.

On the Cloudera Manager server, we recommend using software or hardware RAID1 and a single root partition; /var/log/ can optionally be placed on a separate partition. On the servers that will be added to the Hadoop cluster, we recommend creating two partitions:

- a root partition for the operating system;
- a /dfs partition occupying all remaining space, for HDFS data.

On all servers, including the Cloudera Manager server, you need to disable SELinux and the firewall. You can of course keep them enabled, but then you will have to spend considerable time and effort fine-tuning the security policies. To keep things secure, it is recommended to isolate the cluster from the outside world as much as possible at the network level, for example with a hardware firewall or an isolated VLAN (with access to package mirrors through a local proxy).

  # vi /etc/selinux/config   # turn off SELinux
 SELINUX=disabled
 # system-config-firewall-tui   # turn off the firewall and save the settings
 # reboot 
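
If you prefer a non-interactive approach (for example, in the %post section of a kickstart file), the firewall can be disabled with the standard CentOS 6 service commands instead of the TUI; a minimal sketch:

  # chkconfig iptables off && chkconfig ip6tables off   # do not start on boot
  # service iptables stop && service ip6tables stop     # stop immediately
  # sestatus                                            # after a reboot, should report "disabled"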


Below we offer examples of ready-made kickstart files for automatic installation of the Cloudera Manager server and of the cluster nodes.
Example cloudera_manager.ks
 install
 text
 reboot

 ### General
 url --url http://mirror.selectel.ru/centos/6.4/os/x86_64
 # disable SELinux for CDH
 selinux --disabled
 rootpw supersecretpasswrd
 authconfig --enableshadow --enablemd5
 # Networking
 firewall --disabled
 network --bootproto=static --device=eth0 --ip=1.2.3.254 --netmask=255.255.255.0 --gateway=1.2.3.1 --nameserver=188.93.16.19,109.234.159.91,188.93.17.19 --hostname=cm.example.net
 # Regional
 keyboard us
 lang en_US.UTF-8
 timezone Europe/Moscow

 ### Partitioning
 zerombr yes
 bootloader --location=mbr --driveorder=sda,sdb
 clearpart --all --initlabel
 part raid.11 --size 1024 --asprimary --ondrive=sda
 part raid.12 --size 1 --grow --asprimary --ondrive=sda
 part raid.21 --size 1024 --asprimary --ondrive=sdb
 part raid.22 --size 1 --grow --asprimary --ondrive=sdb
 raid /boot --fstype ext3 --device md0 --level=RAID1 raid.11 raid.21
 raid pv.01 --fstype ext3 --device md1 --level=RAID1 raid.12 raid.22
 volgroup vg0 pv.01
 logvol swap --vgname=vg0 --size=12288 --name=swap --fstype=ext4
 logvol / --vgname=vg0 --size=1 --grow --name=vg0-root --fstype=ext4

 %packages
 @Base
 wget
 ntp

 %post --erroronfail
 chkconfig ntpd on
 wget -q -O /etc/yum.repos.d/cloudera-manager.repo http://archive.cloudera.com/cm4/redhat/6/x86_64/cm/cloudera-manager.repo
 rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
 yum -y install jdk
 yum -y install cloudera-manager-daemons
 yum -y install cloudera-manager-server
 yum -y install cloudera-manager-server-db 


Example node.ks
 install
 text
 reboot

 ### General
 url --url http://mirror.selectel.ru/centos/6.4/os/x86_64
 # disable SELinux for CDH
 selinux --disabled
 rootpw nodeunifiedpasswd
 authconfig --enableshadow --enablemd5
 # Networking
 firewall --disabled
 network --bootproto=static --device=eth0 --ip=1.2.3.10 --netmask=255.255.255.0 --gateway=1.2.3.1 --nameserver=188.93.16.19,109.234.159.91,188.93.17.19 --hostname=node.example.net
 # Regional
 keyboard us
 lang en_US.UTF-8
 timezone Europe/Moscow

 ### Partitioning
 zerombr yes
 bootloader --location=mbr --driveorder=sda
 clearpart --all --initlabel
 part /boot --fstype ext3 --size 1024 --asprimary --ondrive=sda
 part pv.01 --fstype ext3 --size 1 --grow --asprimary --ondrive=sda
 # repeat for every hard drive (each physical volume needs a unique id)
 part pv.02 --fstype ext3 --size 1 --grow --asprimary --ondrive=sdb
 part pv.03 --fstype ext3 --size 1 --grow --asprimary --ondrive=sdc

 volgroup vg0 pv.01 pv.02 pv.03
 logvol swap --vgname=vg0 --size=512 --name=swap --fstype=ext4
 logvol / --vgname=vg0 --size=51200 --name=vg0-root --fstype=ext4
 logvol /dfs --vgname=vg0 --size=1 --grow --name=dfs --fstype=ext4

 %packages
 @Base
 wget
 ntp

 % post --erroronfail
 chkconfig ntpd on



Installing Cloudera Manager


Let's start by installing Cloudera Manager, which will then deploy and configure our future Hadoop cluster on servers.

Before installation, you must make sure that:

- all cluster hosts are reachable from the Cloudera Manager server and resolve each other's names via DNS (forward and reverse);
- time on all servers is synchronized via NTP;
- the Cloudera Manager server can log in to the cluster nodes over SSH as root.

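A quick way to verify these points from the Cloudera Manager server (hostnames and addresses are taken from the kickstart examples above):

  # host node.example.net          # forward DNS resolution
  # host 1.2.3.10                  # reverse DNS resolution
  # ntpstat                        # NTP synchronization status
  # ssh root@node.example.net id   # SSH access as root
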
Add a Cloudera mirror and install the necessary packages:
  # wget -q -O /etc/yum.repos.d/cloudera-manager.repo http://archive.cloudera.com/cm4/redhat/6/x86_64/cm/cloudera-manager.repo
 # rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
 # yum -y install jdk
 # yum -y install cloudera-manager-daemons
 # yum -y install cloudera-manager-server
 # yum -y install cloudera-manager-server-db 


When the installation finishes, we start the bundled database (we will use it for simplicity, although any third-party database can be connected instead) and the Cloudera Manager service itself:

  # /etc/init.d/cloudera-scm-server-db start
 # /etc/init.d/cloudera-scm-server start 
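
Before moving on, it is worth checking that the service actually started; the log path below is the Cloudera Manager default:

  # service cloudera-scm-server status
  # tail /var/log/cloudera-scm-server/cloudera-scm-server.log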


Deploying Cloudera Hadoop Cluster


After installing Cloudera Manager, you can forget about the console: all further interaction with the cluster happens through the Cloudera Manager web interface. By default it uses port 7180; you can use either the DNS name or the IP address of the server. Enter this address in the browser's address bar.
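You can confirm that the interface answers even before opening a browser (the hostname here is from our kickstart example; any 2xx or 3xx response code means Cloudera Manager is up):

  # curl -s -o /dev/null -w '%{http_code}\n' http://cm.example.net:7180/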
A login window will appear; the default login and password are admin / admin. Of course, they should be changed immediately.
A window then opens offering a choice of Cloudera Hadoop edition: free, a 60-day trial, or a paid license:


Select the free version (Cloudera Standard). A trial or paid license can be activated at any time later, once you are already comfortable working with the cluster.

During installation, the Cloudera Manager service connects over SSH to the servers in the cluster and performs all actions on them on behalf of the user specified in the menu (root by default).

Next, Cloudera Manager will ask you to specify the addresses of the hosts where Cloudera Hadoop will be installed:


Addresses can be specified as a list or by a pattern, for example:

  node1.example.net node2.example.net node3.example.net
  node[1-3].example.net
  1.2.3.[10-12]

Then click the Search button. Cloudera Manager will detect the specified hosts and display their list on the screen:


We check once more that all the necessary hosts are in the list (new hosts can be added by clicking the New Search button), then click Continue. The repository selection window will open:


As the installation method we recommend parcels; we have already described their advantages above. Parcels are installed from the archive.cloudera.com repository. In addition to the CDH parcel, the same repository provides the SOLR search tool and Impala, an SQL engine for interactive queries over data stored in Hadoop.

After selecting the parcels to install, click Continue. In the next window, specify the SSH access parameters (login, password or private key, port number):


Click Continue once more, and the installation process will begin:


Upon completion of the installation, a table with a summary of the installed components and their versions will be displayed on the screen:


Once again we check that everything is in order and click Continue. A window will appear asking you to select which Cloudera Hadoop components and services to install:


For our example we will install all components by choosing the "All Services" option; any service can be installed or removed later. Next, you need to specify which Cloudera Hadoop components will run on which hosts. We recommend trusting the default selection; detailed recommendations on role placement can be found in the documentation for each specific service.


Click Continue and proceed to the next step, setting up the database:


By default, all information related to monitoring and managing the system is stored in the PostgreSQL database installed together with Cloudera Manager. Other databases can be used as well; in that case, choose Use Custom Database from the menu. After setting the required parameters, verify connectivity with the "Test Connection" button and, if successful, click "Continue" to proceed to configuring the cluster components:


Click Continue and the cluster configuration process will start; its progress is displayed on the screen:


When the configuration of all components is complete, we land on our cluster's dashboard. Here, for example, is the dashboard of our test cluster:


Instead of a conclusion


In this article we tried to walk you through installing a Hadoop cluster and to show that, with ready-made distributions such as Cloudera Hadoop, it takes very little time and effort. We recommend continuing your acquaintance with Hadoop with Tom White's book "Hadoop: The Definitive Guide"; a Russian edition is also available.

Working with Cloudera Hadoop in specific usage scenarios will be covered in the following articles of this series. The next publication will be devoted to Flume, a universal tool for collecting logs and other data.

For those who cannot comment on posts on Habr, we invite you to our blog.

Source: https://habr.com/ru/post/198534/

