Getting Started with Hadoop: Fundamentals & MapReduce


Overview/Description

In this course, learners will explore the theory behind big data analysis using Hadoop and how MapReduce enables parallel processing of large data sets distributed across a cluster of machines. Begin with an introduction to big data and the various sources and characteristics of data available today. Look at the challenges involved in processing big data and the options available to address them. Next, get a brief overview of Hadoop, its role in processing big data, and the functions of its components, such as the Hadoop Distributed File System (HDFS), MapReduce, and YARN (Yet Another Resource Negotiator). Explore the workings of Hadoop's MapReduce framework as it processes data in parallel on a cluster of machines. Recall the steps involved in building a MapReduce application and the specifics of how the Map phase processes each row of the input file's data. Recognize the functions of the Shuffle and Reduce phases in sorting and interpreting the output of the Map phase to produce meaningful output. To conclude, complete an exercise on the fundamentals of Hadoop and MapReduce.
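
To make the Map and Shuffle/Reduce workflow described above concrete, here is a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API. It is an illustration rather than course material, and the class and field names (WordCount, TokenizerMapper, IntSumReducer) are assumptions chosen for readability.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map phase: invoked once per line (record) of the input split.
  // Each word in the line is emitted as an intermediate (word, 1) pair.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: by the time reduce() runs, the Shuffle stage has already
  // grouped and sorted the intermediate pairs by key, so each call sees one
  // word together with every count emitted for it across all mappers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);  // final (word, total count) pair
    }
  }
}

Note that the Shuffle between the two phases is performed by the framework, not by user code: intermediate (word, 1) pairs are partitioned, sorted, and grouped by key before being handed to the reducer.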



Expected Duration (hours)
1.1

Lesson Objectives

Getting Started with Hadoop: Fundamentals & MapReduce

  • Course Overview
  • describe what big data is and list the various sources and characteristics of data available today
  • recognize the challenges involved in processing big data and the options available to address them, such as vertical and horizontal scaling
  • specify the role of Hadoop in processing big data and describe the functions of its components, such as HDFS, MapReduce, and YARN
  • identify the purpose and describe the workings of Hadoop's MapReduce framework to process data in parallel on a cluster of machines
  • recall the steps involved in building a MapReduce application and the specific workings of the Map phase in processing each row of data in the input file (a minimal driver sketch follows this list)
  • recognize the functions of the Shuffle and Reduce phases in sorting and interpreting the output of the Map phase to produce a meaningful output
  • recognize the techniques related to scaling data processing tasks, working with clusters, and MapReduce, and identify the Hadoop components and their functions
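
As a companion to the objective on building a MapReduce application, the driver sketch below shows how the pieces are typically wired together and submitted to the cluster. It assumes the WordCount mapper and reducer classes from the earlier sketch; the job name and the input/output paths taken from the command line are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Driver: wires the mapper and reducer together and submits the job to the cluster.
public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");

    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setCombinerClass(WordCount.IntSumReducer.class);  // optional local pre-aggregation
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    // Input and output live on HDFS; the output directory must not exist yet.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A jar containing these classes would typically be launched with something like hadoop jar wordcount.jar WordCountDriver /input /output, where both paths are example HDFS locations.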

Course Number
it_dshpfddj_01_enus


Expertise Level
Beginner