An Efficient Approach for Merging Small Files in HDFS Storage and Accessing Small Files by Using Name Node Method in Big Data


J. Sirisha et al.

Abstract

Big Data is one of the most in-demand techniques in the modern world of software development. In Big Data, distributed files are handled by the open-source software framework Hadoop running on a cluster of commodity hardware, and this framework is considered one of the most powerful for Big Data storage. The HDFS Name Node component stores the metadata for all files, directories, and blocks. HDFS is specially designed to handle large files, but it does not handle a large number of small files well. The proposed system shows how storing a huge number of small files in HDFS overloads the Name Node memory, and reduces this overhead by merging the small files before storage. This approach is helpful in understanding and reducing the memory consumption and workload of the Name Node in the Hadoop distributed file system.
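The abstract does not spell out the paper's exact merging algorithm, so the sketch below is only a generic illustration of the underlying idea: consolidating many small files into a single Hadoop SequenceFile so that the Name Node keeps metadata for one large file and its blocks instead of one entry per small file. The class name SmallFileMerger, the command-line arguments, and the choice of SequenceFile as the container format are assumptions made for illustration, not the authors' method.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Illustrative sketch only: merges every small file in an input directory into
// one SequenceFile, keyed by the original file name, so the Name Node tracks a
// single file's blocks instead of one metadata entry per small file.
public class SmallFileMerger {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path(args[0]);    // directory containing the small files
        Path mergedFile = new Path(args[1]);  // output path of the merged SequenceFile

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(mergedFile),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (!status.isFile()) {
                    continue;
                }
                // Read the whole small file into memory (acceptable here because
                // the files are small by definition) and append it as one record.
                byte[] contents = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(contents);
                }
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(contents));
            }
        }
    }
}

In this kind of scheme, a client reads a small file back by looking up its key in the merged SequenceFile (typically via an index) rather than asking the Name Node for a separate file, which is what keeps the Name Node's metadata footprint roughly constant as the number of small files grows.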


Article Details

How to Cite
Sirisha, J., et al. (2021). An Efficient Approach for Merging Small Files in HDFS Storage and Accessing Small Files by Using Name Node Method in Big Data. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(10), 4667–4673. Retrieved from https://www.turcomat.org/index.php/turkbilmat/article/view/5219