Computer Science, asked by Vasmeen8561, 1 year ago

During the replication process, a block of data is written to all specified datanodes in parallel. True or false?

Answers

Answered by HemantRathore1234
Answer: True.
Answered by Arslankincsem

True.


HDFS (Hadoop Distributed File System) is a reliable way to store large quantities of data as fixed-size blocks in a distributed environment.


Replicating these blocks is what provides fault tolerance. The default replication factor is three, and it is configurable both cluster-wide and per file.
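The cluster-wide default comes from the dfs.replication property in hdfs-site.xml; it can also be changed for an individual file through the Hadoop FileSystem API. Below is a minimal Java sketch of the per-file route, assuming a reachable cluster, the Hadoop client libraries on the classpath, and the hypothetical path /data/example.txt.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChangeReplication {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath;
        // dfs.replication there sets the cluster default (3 out of the box).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/data/example.txt"); // hypothetical file
        // Ask the NameNode to keep 2 replicas of this file instead of 3.
        boolean scheduled = fs.setReplication(file, (short) 2);
        System.out.println("Replication change scheduled: " + scheduled);
        fs.close();
    }
}
```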


Therefore, each block is replicated three times by default, and the copies are stored on different DataNodes.
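To see this placement in practice, one can ask the NameNode which hosts hold each block of a file. A minimal sketch, again assuming the Hadoop client libraries and the hypothetical path above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlockLocations {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path("/data/example.txt"));

        // One BlockLocation per block; getHosts() lists the DataNodes
        // holding that block's replicas (3 hosts under the default factor).
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("Block at offset " + block.getOffset()
                    + " replicated on: " + String.join(", ", block.getHosts()));
        }
        fs.close();
    }
}
```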


Storing a 128 MB file in HDFS with the default configuration therefore consumes 384 MB of raw disk capacity: the file fits in a single 128 MB block, and that block is replicated three times.
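The arithmetic behind that figure is simply file size times replication factor. A plain Java illustration, with sizes mirroring the example above:

```java
public class StorageEstimate {
    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long fileSize = 128 * mb;      // the 128 MB file from the example
        long blockSize = 128 * mb;     // default HDFS block size
        int replication = 3;           // default replication factor

        long blocks = (fileSize + blockSize - 1) / blockSize; // ceiling -> 1 block
        long rawBytes = fileSize * replication;               // 3 physical copies
        System.out.println(blocks + " block(s), "
                + rawBytes / mb + " MB of raw capacity consumed"); // prints 384
    }
}
```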


The NameNode supplies the client with the list of target DataNodes for each block, and the data is streamed to all of them during the same write operation. Hence, a block of data is written to all specified DataNodes in parallel.
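For completeness, a hedged sketch of the write side: the client only names the file and the replication factor; the NameNode chooses the actual DataNodes, and HDFS streams the block to them through the write pipeline. The buffer and block sizes below are illustrative assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteWithReplication {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // create(path, overwrite, bufferSize, replication, blockSize):
        // the NameNode picks 3 target DataNodes, and data written to the
        // stream is forwarded to all of them via the replication pipeline.
        try (FSDataOutputStream out = fs.create(
                new Path("/data/example.txt"),   // hypothetical file
                true,                            // overwrite if present
                4096,                            // buffer size (illustrative)
                (short) 3,                       // replication factor
                128L * 1024 * 1024)) {           // block size: 128 MB
            out.writeBytes("hello, hdfs\n");
        }
        fs.close();
    }
}
```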
