An Efficient and Secure OTP-Enabled File Sharing Service Over Big Data Environment


International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR)
ISSN (P): 2249-6831; ISSN (E): 2249-7943
Vol. 10, Issue 1, Jun 2020, 61-80
© TJPRC Pvt. Ltd.

AN EFFICIENT AND SECURE OTP ENABLED FILE SHARING SERVICE OVER BIG DATA ENVIRONMENT

CHETAN BALAJI, D. NANDA KISHORE, CHETAN N. S & MANOJ. M
Students, Department of ISE, RNS Institute of Technology, Bangalore, India

ABSTRACT

The high volume, velocity, and variety of data produced by diverse scientific and business domains challenge standard data-management solutions, requiring them to scale while ensuring security and dependability. A fundamental problem is where and how to store the vast amount of data that is continuously generated. Private infrastructures are the first option for many organizations; however, creating and maintaining data centers is expensive, requires a specialized workforce, and can create hurdles to sharing. Conversely, attributes such as cost-effectiveness, ease of use, and (almost) infinite scalability make public cloud services natural candidates for addressing data-storage problems. File sharing has become an essential part of this century, and with various applications files can be shared with large numbers of users. The Hadoop Distributed File System (HDFS) can be used for storage; it is mainly used for unstructured data analysis and handles large files in a single server. Common sharing methods such as removable media, servers, computer networks, and World Wide Web hyperlinked documents are widely used. In the proposed work, files are merged using the MapReduce programming model on Hadoop. This process improves the performance of Hadoop by rejecting files larger than the Hadoop block size from the merge, and it reduces the memory required by the NameNode.

KEYWORDS: HDFS, MapReduce, File Sharing & Hadoop

Received: May 02, 2020; Accepted: May 22, 2020; Published: Jun 01, 2020; Paper Id.: IJCSEITRJUN20207

1. INTRODUCTION

MapReduce

The main concept behind the Hadoop file system is MapReduce, a programming model developed by Google to process petabyte-scale data by distributing the work across clusters. MapReduce has two parts: Map and Reduce. Map is a transformation step in which individual records are processed in parallel; Reduce is a summarization step in which all related records are processed together as a single group.

The problem addressed in this paper is handling a huge number of small files, which places a heavy load on the NameNode. The aim is to improve the performance of Hadoop when handling small files by merging them efficiently, as sketched in the job below.

The 3Vs (volume, variety, and velocity) are three defining properties, or dimensions, of big data. Volume refers to the amount of data, variety to the different types of data, and velocity to the speed of data processing. According to the 3Vs model, the challenges of big data management result from the expansion of all three properties, rather than from the volume alone.
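The merging job itself is not given in this section, so what follows is a minimal sketch of one way the Map and Reduce phases described above could combine many small text files into a few larger ones, written against the standard org.apache.hadoop.mapreduce API. The class names, the bucket count NUM_BUCKETS, and the hash-bucketing rule are illustrative assumptions, not the authors' implementation.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    /* Sketch: merge many small text files into a few large outputs by
       bucketing each source file on a hash of its name. */
    public class SmallFileMerge {

      public static class BucketMapper
          extends Mapper<LongWritable, Text, Text, Text> {
        private static final int NUM_BUCKETS = 8;  // assumed tuning parameter
        private final Text bucket = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          // Map phase (transformation): tag each record with a bucket
          // derived from its source file, so a file's lines stay together.
          String file = ((FileSplit) context.getInputSplit()).getPath().getName();
          int b = (file.hashCode() & Integer.MAX_VALUE) % NUM_BUCKETS;
          bucket.set("merged-" + b);
          context.write(bucket, value);
        }
      }

      public static class ConcatReducer
          extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
          // Reduce phase (summarization): write every line of the bucket
          // under one key, producing a single large merged group.
          for (Text line : values) {
            context.write(key, line);
          }
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "small file merge");
        job.setJarByClass(SmallFileMerge.class);
        job.setMapperClass(BucketMapper.class);
        job.setReducerClass(ConcatReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Bucketing on the source file name keeps the lines of each small file together in one output, and a small bucket count turns many small inputs into a few large files. This is what relieves the NameNode, since it holds metadata in memory for every file and block it tracks.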
