Top 39 Hadoop Cluster Interview Questions You Must Prepare

To restart the Namenode, either:

  1. Click on stop-all.sh and then on start-all.sh, OR
  2. Write sudo hdfs (press Enter), su - hdfs (press Enter), /etc/init.d/ha (press Enter), and then /etc/init.d/hadoop-0.20-namenode start (press Enter).
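
As a rough sketch of the second option on the command line (assuming a CDH3-era install with an hdfs system user and the hadoop-0.20 init scripts):

    # Switch to the hdfs user (assumes a CDH-style install with an hdfs account)
    sudo su - hdfs
    # Stop and start the Namenode daemon via its init script
    /etc/init.d/hadoop-0.20-namenode stop
    /etc/init.d/hadoop-0.20-namenode start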

In Fully Distributed mode, clusters range from a few nodes to 'n' number of nodes. It is used in production environments, where we have thousands of machines in the Hadoop cluster. The Hadoop daemons run on these clusters. We have to configure separate masters and separate slaves in this distribution, the implementation of which is quite complex. In this configuration, the Namenode and Datanodes run on different hosts, and there are separate nodes on which the task tracker runs. The root of the distribution is referred to as HADOOP_HOME.

If you have to look for the Namenode in the browser, you don't use localhost:8021; the port number for the Namenode's web interface is 50070 (i.e., http://localhost:50070).

To change from the su (superuser) account back to the Cloudera user, just type exit.

Secure Socket Shell (SSH) is a password-less, secure communication mechanism that provides administrators with a safe way to access a remote computer; data packets are sent across to the slaves in a defined format. SSH communication is not only between masters and slaves but also between any two hosts in a network. SSH appeared in 1995 with the introduction of SSH-1; SSH-2 is now in use, with its vulnerabilities coming to the fore when Edward Snowden leaked information showing that some SSH traffic had been decrypted.

There are 3 configuration files in Hadoop:

  1. core-site.xml, which sets fs.default.name, e.g. hdfs://localhost:9000
  2. hdfs-site.xml, which sets dfs.replication, e.g. 1
  3. mapred-site.xml, which sets mapred.job.tracker, e.g. localhost:9001
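
As a sketch (using the single-node values above; hostnames and ports are placeholders for a real cluster), the three files look like this:

    <!-- conf/core-site.xml -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    <!-- conf/hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    <!-- conf/mapred-site.xml -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>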

Just as we format a drive in Windows, the DFS is formatted for proper structuring of data. Doing this on an existing cluster is not usually recommended, as it formats the Namenode too in the process, which is not desired.
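
For reference, a sketch of the formatting command (run only on a fresh cluster; assumes the Hadoop 0.20-era CLI):

    # Format HDFS via the Namenode — destroys any existing filesystem metadata
    hadoop namenode -format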

The Hadoop core uses the Secure Shell (SSH) to launch the server processes on the slave nodes. It requires a password-less SSH connection between the master and all the slaves and the Secondary Namenode machines.
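
A minimal sketch of setting up password-less SSH (assuming an RSA key and a single-node setup; on a real cluster the public key would be copied to every slave):

    # Generate a password-less RSA key pair
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
    # Authorize the key for logins to this host
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    # Verify that no password prompt appears
    ssh localhost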

Yes, we can have multiple entries in the masters file.

The masters file contains a list of hosts, one per line, that are to host Secondary Namenode servers. The slaves file contains a list of hosts, one per line, that host Datanode and task tracker servers.
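
For illustration (hostnames are hypothetical), both files are plain lists of hosts, one per line:

    conf/masters:
        snn-host01

    conf/slaves:
        slave-host01
        slave-host02
        slave-host03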

Yes, we can definitely do that.  Once we become familiar with the Apache Hadoop environment, we can create a cluster from scratch.

The mapred.job.tracker property (it is a configuration property, not a command) specifies the host and port at which the MapReduce job tracker runs. If it is set to "local", jobs are run in-process as a single map and reduce task.

In practice, Ubuntu and Red Hat Linux are the best operating systems for Hadoop. Windows can be used, but it is not used frequently for installing Hadoop, as there are many support problems related to it. The frequency of crashes and the subsequent restarts make it unattractive. As such, Windows is not recommended as a preferred environment for a Hadoop installation, though users can give it a try for learning purposes in the initial stages.

We need password-less SSH in a Fully-Distributed environment because when the cluster is live and running in Fully Distributed mode, communication is very frequent. The job tracker should be able to send a task to a task tracker quickly.

The HDFS client does not decide it. The input split is already specified in one of the configuration files, through which the split size is configured.
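
For illustration, the split size can be influenced through configuration such as the classic-API property below (the value is a placeholder, 128 MB in bytes):

    <property>
      <name>mapred.min.split.size</name>
      <value>134217728</value>  <!-- lower bound on split size, in bytes -->
    </property>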

This is the default configuration of Hadoop that you have to download from Cloudera or from Edureka's Dropbox and then run on your systems. You can also proceed with your own configuration, but you need a Linux box, be it Ubuntu or Red Hat. The installation steps are present at the Cloudera location or in Edureka's Dropbox. You can go either way.

The slaves and masters files are used by the startup and shutdown commands.

Yes, fs.mapr.working.dir is just one directory.

When the job tracker is down, it will not be functional and all running jobs will be halted, because it is a single point of failure. Your whole cluster's MapReduce capability will be down, but the Namenode will still be present. As such, the cluster will still be accessible if the Namenode is working, even if the job tracker is not up and running; you just cannot run your Hadoop jobs.

The Namenode is the main point that keeps all the metadata and keeps track of Datanode failures with the help of heartbeats. When the Namenode is down, your cluster will be completely down, because the Namenode is the single point of failure in a Hadoop installation.

To check whether the Namenode is working or not, use the command /etc/init.d/hadoop-0.20-namenode status, or simply jps.
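
For example (the output below is illustrative; process IDs and the set of daemons will differ):

    $ jps
    4382 NameNode
    4527 DataNode
    4742 JobTracker
    4898 TaskTracker
    5011 Jps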

In stand-alone or local mode, there are no Hadoop daemons running, and everything runs in a single Java process. Hence, we don't get the benefit of distributing the code across a cluster of machines. Since it has no DFS, it utilizes the local file system. This mode is suitable only for running MapReduce programs by developers during various stages of development. It's the best environment for learning and good for debugging purposes.
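
A sketch of running a bundled example job in this mode (the examples jar name varies by release; input/ and output/ are local directories, since there is no DFS):

    # Standalone mode: reads and writes the local filesystem, no daemons involved
    hadoop jar hadoop-examples.jar wordcount input/ output/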

The three modes in which Hadoop can be run are:

  1. Standalone (local) mode - No Hadoop daemons running; everything runs in a single Java Virtual Machine.
  2. Pseudo-distributed mode - Daemons run on the local machine, thereby simulating a cluster on a smaller scale.
  3. Fully distributed mode - Runs on a cluster of machines.

If a Namenode has no data, it cannot be considered a Namenode. In practical terms, a Namenode needs to have some data.

Cloudera and Apache have the same directory structure. Hadoop is installed in /usr/lib/hadoop-0.20/.

The hadoop-metrics.properties file is used for reporting purposes. It controls the metrics reporting for Hadoop; the default setting is 'not to report'.
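
For illustration, the non-reporting default maps each metrics context to a NullContext (a sketch of the classic file format):

    # hadoop-metrics.properties — NullContext means no metrics are reported
    dfs.class=org.apache.hadoop.metrics.spi.NullContext
    mapred.class=org.apache.hadoop.metrics.spi.NullContext
    jvm.class=org.apache.hadoop.metrics.spi.NullContext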

SSH is a secure shell communication; it is a secure protocol and the most common way of administering remote servers safely, being relatively simple and inexpensive to implement. A single SSH connection can host multiple channels and hence can transfer data in both directions. SSH works on port number 22, which is the default; it can be configured to point to a new port number, but this is not recommended. On localhost, a password is required in SSH for security, and in situations where password-less communication is not set up.

In Pseudo-distributed mode, each Hadoop daemon runs in a separate Java process, so it simulates a cluster, though on a small scale. This mode is used for both development and QA environments. Here, we need to make the configuration changes.

The three main hdfs-site.xml properties are:

  1. dfs.name.dir, which gives you the location where metadata will be stored and where DFS is located – on disk or on a remote location.
  2. dfs.data.dir, which gives you the location where the data is going to be stored.
  3. fs.checkpoint.dir, which is for the Secondary Namenode.
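
A sketch of how these appear in hdfs-site.xml (the paths are placeholders):

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/var/lib/hadoop/name</value>  <!-- Namenode metadata -->
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/var/lib/hadoop/data</value>  <!-- Datanode block storage -->
      </property>
      <property>
        <name>fs.checkpoint.dir</name>
        <value>/var/lib/hadoop/checkpoint</value>  <!-- Secondary Namenode checkpoints -->
      </property>
    </configuration>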

"fsck" is File System Check. FSCK is used to check the health of a Hadoop Filesystem. It generates a summarized report of the overall health of the filesystem. 

Usage:  hadoop fsck /
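
Some commonly used options, for example:

    # Report on every file, its blocks, and where each block replica lives
    hadoop fsck / -files -blocks -locations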

Cloudera is the leading Hadoop distribution vendor in the Big Data market. It is termed next-generation data management software, required for business-critical data challenges that include access, storage, management, business analytics, systems security, and search.

Hadoop core is specified by two resources. It is configured by two well-written XML files which are loaded from the classpath:

  1. Hadoop-default.xml - Read-only defaults for Hadoop, suitable for a single-machine instance.
  2. Hadoop-site.xml - It specifies the site configuration for the Hadoop distribution. The cluster-specific information is also provided by the Hadoop administrator here.

The web UI port number for the Namenode is 50070, for the job tracker 50030, and for the task tracker 50060.

To come out of insert mode, press ESC, then:

Type :q (if you have not written anything) OR

Type :wq (if you have written anything in the file), and then press ENTER.

/etc/init.d specifies where daemons (services) are placed and where you can check the status of these daemons. It is very Linux-specific and has nothing to do with Hadoop.

Spill factor is the size after which your files move to the temp files; the Hadoop-temp directory is used for this. The default value of io.sort.spill.percent is 0.80. A value less than 0.5 is not recommended.
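
A sketch of tuning this in mapred-site.xml (0.80 shown is the default):

    <property>
      <name>io.sort.spill.percent</name>
      <value>0.80</value>  <!-- spill to disk when the sort buffer is 80% full -->
    </property>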

Numerous companies are using Hadoop, from large software companies and MNCs to small organizations. Yahoo is the top contributor, with many open source Hadoop software projects and frameworks. Social media companies like Facebook and Twitter have been using it for a long time now for storing their mammoth data. Apart from that, Netflix, IBM, Adobe, and e-commerce websites like Amazon and eBay also use multiple Hadoop technologies.

The hadoop-env.sh file contains some environment variable settings used by Hadoop; it provides the environment for Hadoop to run. The path of JAVA_HOME is set here for it to run properly. The file is present in the conf/ location. You can also create your own custom configuration file under conf/, which will allow you to override the default Hadoop settings.
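
For illustration (the JDK path is a placeholder and varies by system):

    # conf/hadoop-env.sh — point Hadoop at the local JDK
    export JAVA_HOME=/usr/lib/jvm/java-6-sun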