Hello All,

This is the last and final step in the deployment of the Big Data cluster, where we will deploy all the Hadoop nodes (Compute DataMaster, Compute Master, Worker node group, Client node group).
You can read about the nodes and their working descriptions in this post: Big Data Extension: Node Nomenclature, Description & Architecture


Let's start the deployment.

1. Log in to the vCenter Web Client and go to the Big Data Extensions plugin.


2. Click the (+) sign in the Big Data Clusters window; a wizard pops up to create the Big Data cluster. Enter the name, select the distribution type, and click Next.

3. Select Basic Hadoop Cluster as the deployment type. This mimics a traditional physical Hadoop deployment. Here is a description of the available deployment types:

  • Basic Hadoop Cluster: For simple Hadoop deployments, proof-of-concept projects, and other small-scale data processing tasks.
  • Basic HBase Cluster: HBase clusters can contain JobTracker or TaskTracker nodes to run HBase MapReduce jobs.
  • Compute-only Hadoop Cluster: For running MapReduce jobs; these clusters read data from external HDFS clusters and don't store data themselves.
  • Compute Workers Only Cluster: If you already have a physical Hadoop cluster and want to run more CPU- or memory-intensive operations, you can increase the compute capacity by provisioning a workers-only cluster. With compute workers only clusters, you can "burst out to virtual." Workers-only clusters are not supported on the Ambari and Cloudera Manager application managers.
  • HBase Only Cluster: Contains only HBase Master, HBase RegionServer, and ZooKeeper nodes, but no NameNodes or DataNodes. The advantage of an HBase-only cluster is that multiple HBase clusters can share the same external HDFS.
  • Data-Compute Separation Hadoop Cluster: Allows you to separate the data and compute nodes, which gives you control over where nodes are placed on your ESXi hosts. It also facilitates elastic scaling of the compute nodes.
  • Customized Cluster: Allows creation of clusters using the same configuration file as previously created clusters. You can also edit the file to further customize the cluster configuration.
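
For the Customized Cluster type, the cluster definition lives in a JSON specification file describing each node group. As a rough sketch only (the exact attribute and role names vary by BDE version, and all sizes below are placeholder values, not recommendations), a data-compute separated spec might look like this:

```json
{
  "nodeGroups": [
    {
      "name": "master",
      "roles": ["hadoop_namenode", "hadoop_jobtracker"],
      "instanceNum": 1,
      "cpuNum": 2,
      "memCapacityMB": 7500,
      "storage": { "type": "SHARED", "sizeGB": 50 }
    },
    {
      "name": "data",
      "roles": ["hadoop_datanode"],
      "instanceNum": 3,
      "cpuNum": 2,
      "memCapacityMB": 3748,
      "storage": { "type": "LOCAL", "sizeGB": 50 }
    },
    {
      "name": "compute",
      "roles": ["hadoop_tasktracker"],
      "instanceNum": 3,
      "cpuNum": 2,
      "memCapacityMB": 3748,
      "storage": { "type": "SHARED", "sizeGB": 20 }
    },
    {
      "name": "client",
      "roles": ["hadoop_client", "pig", "hive"],
      "instanceNum": 1,
      "cpuNum": 1,
      "memCapacityMB": 3748,
      "storage": { "type": "SHARED", "sizeGB": 50 }
    }
  ]
}
```

Splitting the DataNode and TaskTracker roles into separate "data" and "compute" node groups is what enables the data-compute separation and elastic compute scaling described above.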


4. You can go to each node and modify its size to suit your environment's needs.


5. Here you can customize the resources as needed and select the datastore for VMDK and file placement.


6. Leave the Topology and Network at their defaults.


7. Select Cluster and click Next.


8. Enter the password that will be used later for cluster management.


9. Review all the settings and click Finish.


10. Once you click Finish, the deployment of your Hadoop cluster starts.


11. After the deployment completes, you can view and manage the nodes in your vCenter environment.


This is the final post in the series on Big Data Extension and Hadoop cluster deployment. I hope you all enjoyed this series.

Introduction: What is VMware Big Data Extension
Big Data Extension: Node Nomenclature, Description & Architecture
Part 1 - Deploy VMware Big Data Extension - Step by Step
Part 2 - Setup the Serengeti Server to Manage Big Data Extension Cluster
Part 3 - Preparation for Big Data Cluster Deployment
Part 4 - Final Big Data Cluster Deployment Step-by-Step
