This post explains what Safemode in the Namenode is and which configuration properties control Safemode in Hadoop. You will also see the commands available to enter and leave Safemode explicitly.
When the Namenode starts, it first loads the file system state into memory from the fsimage and then applies the recent changes from the edit log file.
The Namenode then waits for the Datanodes to report their blocks. Note that block location information is kept in memory by the Namenode; it is not stored in any file. So the Namenode needs time to receive block locations from the Datanodes. While the file system state is being loaded from the fsimage and edit log and the Datanodes are sending their block reports, the Namenode stays in Safemode.
Safemode in Hadoop
Safemode for the NameNode is essentially a read-only mode for the HDFS cluster, in which no modifications to the file system or blocks are allowed.
It is important for the Namenode to stay in Safemode during this whole process because, without enough time to collect the block reports from the Datanodes, the Namenode would start replicating blocks prematurely even though enough replicas already exist in the cluster.
Normally the NameNode leaves Safemode automatically after the DataNodes have reported that most file system blocks are available.
How does the Namenode know when enough block reports have arrived to exit Safemode? That is controlled by the following properties in hdfs-site.xml (if you don't configure them explicitly, the default values are used).
dfs.namenode.safemode.threshold-pct - Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.namenode.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent. The default is 0.999, which means the Namenode exits Safemode when 99.9% of the blocks in the file system meet the minimum replication level.
dfs.namenode.safemode.extension - Determines how long, in milliseconds, safe mode is extended after the threshold level is reached. The default is 30000 milliseconds (30 seconds).
With the default values, the Namenode exits Safemode 30 seconds after 99.9% of the blocks in the file system meet the minimum replication level.
There is also a property for specifying the minimum number of Datanodes that must be considered alive.
dfs.namenode.safemode.min.datanodes - Specifies the number of datanodes that must be considered alive before the Namenode exits safemode. Values less than or equal to 0 mean not to take the number of live datanodes into account when deciding whether to remain in safe mode during startup. Values greater than the number of datanodes in the cluster will make safe mode permanent. The default value for this property is 0.
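As an illustration, here is a minimal sketch of how these properties could be set in hdfs-site.xml. The values shown are just the defaults, included only as an example-

<!-- Percentage of blocks that must meet the minimum replication level -->
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
</property>
<!-- Extension of Safemode in milliseconds after the threshold is reached -->
<property>
  <name>dfs.namenode.safemode.extension</name>
  <value>30000</value>
</property>
<!-- Minimum number of live datanodes required before leaving Safemode -->
<property>
  <name>dfs.namenode.safemode.min.datanodes</name>
  <value>0</value>
</property>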
HDFS commands for Safemode
Entering Safemode explicitly
HDFS can be placed in Safemode explicitly using the following HDFS command-
hdfs dfsadmin -safemode enter
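If the command succeeds you should see a confirmation line similar to this-

Safe mode is ON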
Leaving Safemode explicitly
The following command forces the Namenode to leave Safemode. It can be used when you get a SafeModeException with the “Name node is in safe mode” message.
hdfs dfsadmin -safemode leave
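If you would rather wait for the Namenode to come out of Safemode on its own, for example in a startup script, the dfsadmin command also has a wait option that blocks until Safemode is off-

hdfs dfsadmin -safemode wait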
Checking the Safemode status in Hadoop
You can check the NameNode web UI front page to see whether Safemode is on or off.
From the command line you can use the following command to check that-
hdfs dfsadmin -safemode get
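The command prints a status line similar to one of these-

Safe mode is ON
Safe mode is OFF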
That's all for the topic Namenode in Safemode. If something is missing or you have something to share about the topic, please write a comment.