The JobTracker is the master node in the MapReduce part of the Hadoop architecture. A client submits a job to the JobTracker, and the MapReduce tasks are then run to compute the actual result. MapReduce processing in Hadoop is handled by the JobTracker and TaskTracker daemons. The JobTracker places incoming jobs in a queue, and a scheduler inside the JobTracker picks jobs up from that queue.
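The scheduler itself is pluggable. As a rough sketch (assuming a Hadoop 1.x MRv1 setup; the property name should be checked against your release), the scheduler class is chosen in mapred-site.xml:

<property>
  <!-- Default FIFO queue scheduler; can be swapped for an alternative
       such as the Fair Scheduler. -->
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
</property>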
The number of map tasks depends on the number of input splits of the data file, while the number of reduce tasks depends on the value passed to the setNumReduceTasks() method of the job object. The JobTracker cannot return the result to the client until all of the map and reduce tasks have completed.
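For a concrete picture, here is a minimal driver sketch (assuming the Hadoop 1.x org.apache.hadoop.mapreduce API; the class name, paths, and reduce count are illustrative) showing where setNumReduceTasks() is called and where the client blocks until all tasks finish:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobDriverSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "driver sketch");
        job.setJarByClass(JobDriverSketch.class);

        // Identity mapper and reducer; a real job would plug in its own classes.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        // The map task count comes from the input splits;
        // the reduce task count is whatever the driver requests here.
        job.setNumReduceTasks(4);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Blocks until every map and reduce task has completed,
        // and only then is the result available to the client.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With this sketch, four reduce tasks would be requested, while the number of map tasks is still determined by however many input splits the input paths produce.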
The JobTracker supervises the work of the TaskTrackers (the slave nodes). It decides which TaskTracker will perform each map task, based on that TaskTracker's proximity to the data, and which will perform each reduce task. TaskTrackers send a heartbeat signal to the JobTracker every few seconds to indicate that they are alive. If the JobTracker stops receiving heartbeats from a TaskTracker, that TaskTracker is considered dead.
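The timeout after which a silent TaskTracker is declared lost is configurable. As a sketch (again assuming Hadoop 1.x MRv1; verify the property name against your version), mapred-site.xml on the JobTracker node might contain:

<property>
  <!-- Milliseconds the JobTracker waits without a heartbeat before
       marking a TaskTracker as lost (10 minutes is the usual default). -->
  <name>mapred.tasktracker.expiry.interval</name>
  <value>600000</value>
</property>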
Note: The JobTracker is a separate daemon. In small clusters it often runs on the same machine as the NameNode, but in larger deployments it typically runs on its own master node.