During the replication process, a block of data is written to all specified datanodes in parallel. True or false?
Answers
Answered by
Answer = True.
Answered by
True.
HDFS (Hadoop Distributed File System) is a reliable way to store large quantities of data as blocks in a distributed environment.
Replicating these blocks is also what provides fault tolerance. The default replication factor is three and is configurable, so each block is typically stored as three copies on different DataNodes.
Storing a 128 MB file in HDFS with the default configuration therefore consumes about 384 MB of raw storage, since the block is replicated three times.
Hence, a block of data is written to all specified DataNodes in parallel.
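To illustrate the storage arithmetic above, here is a minimal Java sketch. It assumes a Hadoop client configuration on the classpath, and the path /user/demo/sample-128mb.dat is purely hypothetical; it reads a file's replication factor through the org.apache.hadoop.fs.FileSystem API and multiplies it by the file size to estimate the raw space consumed on the cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationFootprint {
    public static void main(String[] args) throws Exception {
        // Hypothetical example path; point this at a real file in your cluster.
        Path file = new Path("/user/demo/sample-128mb.dat");

        // Picks up fs.defaultFS and dfs.replication from the client's
        // core-site.xml / hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            FileStatus status = fs.getFileStatus(file);
            long logicalSize = status.getLen();          // file size as seen by the user
            short replication = status.getReplication(); // e.g. 3 with default settings

            // Raw storage consumed across the cluster = file size x replication factor,
            // e.g. a 128 MB file with replication 3 occupies roughly 384 MB.
            long rawFootprint = logicalSize * replication;

            System.out.println("Logical size (bytes): " + logicalSize);
            System.out.println("Replication factor  : " + replication);
            System.out.println("Raw footprint(bytes): " + rawFootprint);
        }
    }
}
```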