
IDL - International Digital Library Of Technology & Research, Volume 1, Issue 6, June 2017

Available at: www.dbpublications.org

International e-Journal For Technology And Research-2017

Fog Computing – Enhancing the Maximum Energy Consumption of Data Servers

Priyanka Chettiyar1, Prabadevi B2 and Jeyanthi N3
School of Information Technology, Vellore Institute of Technology, Vellore, Tamil Nadu.
priyanka.mannarsamy@vit.ac.in, prabadevi.b@vit.ac.in, njeyanthi@vit.ac.in

Abstract— Fog Computing and IoT systems make use of devices on end-user premises as local servers. Here, we identify the scenarios in which running applications from NDCs is more energy-efficient than running the same applications from the MDC. Through a survey and analysis of various energy consumption factors, such as different flow variants and time variants of the network equipment, we arrive at two energy consumption use cases and their respective results. Parameters such as current load, Pmax, Cmax and incremental energy are evaluated with respect to the system structure and various data-related parameters, leading to the conclusion that the NDC uses a relatively reduced amount of energy compared with the MDC. The study reveals that NDCs, as part of the fog, can offset the MDCs in serving the respective applications, especially in scenarios where IoT-based applications are used, end users are the source data providers, and server utilization can be maximized.

Index Terms— Centralized Data Servers, Cmax, Energy expenditure, Fog Computing, Nano Data Servers, Pmax.

1. INTRODUCTION

Cloud computing and its related applications are in increasing demand and are growing swiftly in this digital sector of technology. Studies to date present cloud computing as highly energy-efficient for processing any job compared with running it locally. Nevertheless, when energy utilization is evaluated with respect to the network topology and other factors, such as the power consumed by interactive cloud services at the end-user side, energy consumption turns out to vary across use cases.

The pervasiveness of the growing number of smart devices communicating and making the world more connected is well known as IoT. Recent surveys have expressed the fact that nothing will soon stop IoT from rapidly transforming traditional technology into a digital world. Cloud computing emerged so that application services could easily be made available to end users as frameworks, platforms and software. Still, cloud computing cannot be termed "a platform for all", as it falls short on various issues in meeting the requirements of IoT applications.


Fog computing, also known as fog networking, fogging, or edge computing, is a concept in which clients or intermediate users near the end users accumulate an ample amount of capacity in order to perform the same communication and provide similar services in a way that is more efficient than keeping control at the central cloud servers. It can relieve any capacious cloud server or big data structure where accessing data can be a troublesome task. Fog was introduced to make computing possible in an end-to-end manner for any network topology, where new services and the required applications are delivered more efficiently and easily to millions of smartly interconnected devices. The interconnected fog devices mostly consist of set-top boxes, access points, roadside units, cellular base stations, etc. A three-level hierarchy is formed in the process of complete end-to-end service delivery from the cloud to the smart devices. Thus, fog computing is an intermediate node between the end-user smart devices and the centralized cloud data centers, extending the functionality of cloud computing in a more flexible way. Fog computing is turning out to be popular for an enormous number of IoT applications. Here, we often use the term "Nano Data Centers" (NDC), which are small-capacity storage servers located at end-user premises and used for inter-communication of data with their peers. We can state that fog computing is a paradigm that brings cloud computing to the edges of the network topology.

In this work, we try to find the different use cases in which running an application on an NDC is more efficient than running it on the centralized cloud server. For any network architecture, various energy consumption models are put forward based on content distribution. Two types of network equipment are studied: shared and unshared. Shared network equipment is equipment whose services are shared by many users; unshared network equipment is equipment situated at the end users and used by a single user or by a limited, fixed set of users. Initially, a complete end-to-end network architecture is used in which all the data required for processing from the NDC and the central data center is present. As the data in cloud services is processed and stored in the data centers, the need to understand the energy usage of data servers is the focus, and an obvious starting point for studying the energy consumption of cloud services is the data centers themselves. Nonetheless, even the transport network which routes the end users to the cloud servers plays a visible role in energy utilization. Normally, when the end users access the cloud servers, a subtle amount of energy is consumed. The statistics reveal that an improvement in the energy consumption of the transport network and the end-user smart devices will help improve the performance of the NDCs. The experimental results show that Nano data centers can offset the centralized data servers and reduce the energy consumption for the applications that can easily be migrated from the cloud servers to NDCs. The following figure explains broadly the fog node and its role.



Fig 1: Fog Computing.

Many more interesting features which fog computing makes available to us are: knowledge of location, tracking of end-user devices to help with motion, hierarchical interplay between the fog, the cloud and the end-user devices (signifying how a fog node gets a local overview while a global overview is possible only at a higher level), real-time computation, modifiable optimizations depending on the client-side network and applications, improved caching methodology, knowledge of the end-user smart devices, etc. Handling and managing analytics rapidly with the help of the data provided by IoT applications is made possible by fog data processing.

2. RELATED SURVEY

Fog computing and its services are rapidly growing in every other sector, adding to our global digital revenue. Let us have a brief overview of the various implementations of fog, with their respective strengths and weaknesses.

A] Fog – IoT Platform. IoT brings more than a disruptive proliferation of endpoints, which is problematic in a few ways [1]. In this part, an analysis of those disruptions is done, and a hierarchical distributed architecture that stretches from the edge of the network to the core, nicknamed Fog Computing, is proposed. Specifically, it focuses on bringing a new dimension of pre-computing to Big Data and analytics for IoT: an enormously distributed number of sources at the edge.

B] Internet – Nano Data Centers. The growing concern about energy utilization in modern data centers gave rise to the model of Nano Data Centers (NaDa) [7]. ISP-controlled home gateways were used to provide computing and storage services as well, forming a distributed, peer-to-peer data center model. Video-on-Demand (VoD) services were used to verify the actual capability of NaDa. An energy consumption model for VoD in traditional and in NaDa data centers is developed and evaluated using a large set of empirical VoD access data. The finding is that, even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa, such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs,



and the reduction of network energy consumption because of demand and service co-localization in NaDa.

C] Green Cloud Computing: Balanced Energy Management. Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing [8]. As cloud computing becomes more widespread, the energy consumption of the network and of the computing resources that underpin the cloud will grow. This is happening at a time when increasing attention is being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, less attention has been paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. This paper presents an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as in data processing and data storage. It shows that energy consumption in transport and switching can be a significant percentage of the total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing, where each user performs all computing on their own personal computer (PC).

D] Fog Computing Saving Energy. In this paper, a comparison of the energy utilization of applications on both kinds of servers is done, and the results show that Nano data servers can save energy at a comparatively higher rate depending on various system design factors [9]. The hopping rate also contributes to a small extent alongside the other factors. It is also found that some part of the energy consumed currently can be saved by bringing a few applications down to the Nano platform level.

E] Document Processing – Energy Consumption. Cloud computing and cloud-based services are a rapidly growing sector of the expanding digital economy. Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally [10]. However, these studies have generally ignored the network transport energy and the additional power consumed by end-user devices when accessing the cloud. This paper develops a simple model to estimate the incremental power consumption involved in using interactive cloud services. The model is then applied to a representative cloud-based word processing application, and the measurements show that the volume of traffic generated by a session of the application typically exceeds the amount of data keyed in by the user by more than a factor of 1000. This has important implications for the overall power consumption of the service. The paper provides insights into the reasons behind the observed traffic levels. Finally, the estimated power



consumption is compared with performing the same task on a low-power computer. The study reveals that it is not always energy-wise to use the cloud: performing certain tasks locally can be more energy-efficient than using the cloud.

F] Architecture – IPTV Networks. An energy consumption model of IPTV storage and distribution gives insight into the optimal design of a VoD network [11]. Energy consumption is limited by replicating popular program material on servers near the clients.

G] Fog – Potential. The Internet of Things (IoT) could empower developments that improve the quality of life, yet it produces exceptional amounts of data that are troublesome for traditional frameworks, the cloud, and even edge computing to deal with. Fog computing is intended to overcome these impediments [12].

I] Fog – Feasibility. As billions of devices get connected to the Internet, it will not be manageable to use the cloud as a centralized server. The path forward is to decentralize computation away from the cloud towards the edge of the network, nearer to the client [13]. This decreases the latency of communication between a client device and the cloud, and is the premise of the 'fog computing' characterized in that paper. Its aim is to highlight the feasibility of, and the advantages in, enhancing Quality of Service and Experience by utilizing fog computing. For an online game use case, the average response time for a client is found to improve by 20% when the edge of the network is utilized, in contrast with a cloud-only model. It is additionally observed that the volume of traffic between the edge and the cloud server is reduced by more than 90% for this use case. The preliminary outcomes highlight the capability of fog computing in achieving a practical computing model and the advantages of incorporating the edge of the network into the computing ecosystem.

3. PROPOSED WORK

The highlighting contribution of this paper is the use of an IP lookup algorithm on the base of fog computing. In this project, we propose very small servers known as "Nano data centers" or "Nano data servers", abbreviated as NDC, which play the role of fogs sited in end-user premises for running the applications in a point-to-point fashion. We use a single device, e.g. a laptop or a desktop, which plays the role of the "Main data center" or "Main data server" (MDC) and is specifically the centralized data server of our entire system. With the entire setup, from the cloud server down to the end-user devices, connected in a hierarchical manner, we try to identify various use cases in which the energy consumption of the NDCs and MDCs is calculated based on the implemented "EC Computation Algorithm".

Main Datacenter: The Main Data Center is the server where the applications are deployed. We show the MDC



configurations, load status, the amount of energy it consumes in the idle state, its current state, the number of associated connections, etc. If the Nano Data Center's threshold limit (the maximum number of requests it can handle) is exceeded, then the Main Data Center processes the request. It accepts requests redirected from all sources, i.e. the different fogs, and it has the maximum threshold limit compared to the Nano Data Centers. Examples: any localhost or cloud server browsed on a device such as a laptop or desktop.

Nano Datacenter: The Nano Data Center is likewise a server where the applications are deployed. If the IP address belongs to the region of a Nano Data Center and its threshold limit is not exceeded, the Nano Data Center processes the client request; otherwise, it redirects the request to the Main Data Center. Nano Data Centers have limited capacity. We calculate for them the same parameters as for the MDC mentioned above. An NDC processes requests until its limit is exceeded and then redirects to the MDC as soon as its status turns from normal to overloaded. It has a lower threshold limit compared to the Main Data Centers. Examples: end-user devices, mobile applications, web applications, geo-satellite requesting devices, location detectors, Raspberry Pi kits, etc.

Fig 2: System Architecture of Fog Computing.

The main purpose of fogging is to augment productivity and reduce the amount of data carried to the cloud for processing, analysis and storage. This is regularly done to enhance efficiency; however, it may equally be used for security and compliance reasons. Prominent fog computing applications include smart grids, smart cities, smart buildings, vehicular networks and software-defined networks. The metaphor fog originates from the meteorological term for a cloud close to the ground, just as fog computing concentrates on the edge of the network. With the help of this implementation we can see the real-time energy computations and power consumption taking place in two basic scenarios:

1) System with 1 MDC and 2 NDCs. The entire system consists of a centralized data center and the Nano data centers relying on the main data center. The various parameters, such as Pmax, Cmax and the current load, are calculated for the MDC and the NDCs respectively.



2) System when all NDCs are transformed to MDCs. As soon as the threshold limit of the NDCs is hit, the NDCs are promoted to MDC servers with increased throughput and capacity. The entire system then consists of a centralized data center with the Nano data centers working as MDCs. The various parameters, such as Pmax, Cmax and the current load, are calculated for all the MDCs.
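The following minimal Java sketch illustrates the two scenarios just described: requests are served by the local NDC until its threshold limit is hit, after which they are redirected to the MDC (scenario 1), and an overloaded NDC can be promoted to act as an MDC (scenario 2). The class names, method names and threshold values are illustrative assumptions, not the paper's actual implementation.

```java
// Illustrative sketch of the two scenarios; names and values are assumptions.
public class FogScenarios {

    static class Server {
        final String name;
        final int thresholdLimit;   // maximum number of requests it can handle
        int activeRequests = 0;
        boolean actsAsMdc;

        Server(String name, int thresholdLimit, boolean actsAsMdc) {
            this.name = name;
            this.thresholdLimit = thresholdLimit;
            this.actsAsMdc = actsAsMdc;
        }

        boolean overloaded() { return activeRequests >= thresholdLimit; }
    }

    // Scenario 1: a request goes to the local NDC unless it is overloaded,
    // in which case it is redirected to the MDC.
    static Server route(Server ndc, Server mdc) {
        Server target = ndc.overloaded() ? mdc : ndc;
        target.activeRequests++;
        return target;
    }

    // Scenario 2: once an NDC hits its threshold it is promoted to act as an
    // MDC with increased capacity (modelled here simply as a flag).
    static void promoteIfNeeded(Server ndc) {
        if (ndc.overloaded()) {
            ndc.actsAsMdc = true;
        }
    }

    public static void main(String[] args) {
        Server ndc1 = new Server("Nano1", 3, false);
        Server mdc  = new Server("Main", 100, true);

        for (int i = 0; i < 5; i++) {   // the 4th and 5th requests spill over to the MDC
            System.out.println("Request " + i + " -> " + route(ndc1, mdc).name);
        }
        promoteIfNeeded(ndc1);
        System.out.println("Nano1 now acts as MDC: " + ndc1.actsAsMdc);
    }
}
```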

3.1 Algorithms and Techniques

- Energy consumption algorithm
- IP lookup
  - Modified Elevator-Stairs Algorithm
- Web Services

1) Efficient IP Lookup Algorithm. As there is heavy internet traffic, the backend supporting routers must be capable of transmitting the in-transit packets at high gigabit-per-second speeds. IP address lookup thus plays its role in the high-speed transmission of network packets to the destined routers from source to destination, which is a very challenging task. To deal with gigabit-per-second traffic rates, the backend-supporting routers must have the capacity to forward a large number of datagrams every second on each of their ports. Quick IP address lookup in the routers, which uses the datagram's destination address to decide the next hop for each datagram, is therefore vital to achieve the required datagram forwarding rates. Additionally, a packet may traverse numerous routers before it reaches its destination; consequently, a decrease in delay of microseconds brings about a huge reduction in the time needed to reach the destination. IP address lookup is difficult because it requires a longest-matching-prefix search. Numerous lookup algorithms are available to find the longest prefix match; one such is the Elevator-Stairs Algorithm. Some high-end routers have been implemented with hardware parallelism using TCAM. However, TCAM is a great deal more costly in terms of circuit complexity as well as power consumption. Therefore, efficient algorithmic solutions that can be executed on network processors are essentially required as low-cost alternatives. Among the state-of-the-art algorithms for IP address lookup, a binary search based on a balanced tree is effective in providing a low-cost solution. To construct a balanced search tree, prefixes with a nesting relationship should be converted into completely disjoint prefixes. We propose a small balanced tree using entry reduction for the IP lookup algorithm:

- Take the specified IP address.
- Make the respective segments.
- Take the 1st segment and search at the root level.
- Speed calculation: the array at the root level is considered and a lookup is applied directly.
- Consider the 1st segment node as 224.



- Work with only the one subtree whose root is 224.
- If the data is very dense, we consider the entire array at all the other levels.
- Probably, we require a dictionary for the sub-levels of a kind.
- If the next segment is 201, look up 201 in the dictionary for the 224 node. Now your possible list of candidates is just 64K items (i.e. all IP addresses that are 224.201.x.x).
- Repeat the above process with the next 2 levels.
- The result is that you can resolve an IP address in just 4 lookups: 1 lookup in an array and 3 dictionary lookups.

This structure is also very easy to maintain. Inserting a new address or range requires at most four lookups and adds; the same holds for deleting. Updates can be done in place, without having to rebuild the entire tree. Take care that a read and an update do not occur in the same instance: no concurrent updating should happen, while concurrent reads can be allowed.
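A small Java sketch of the segment-and-dictionary structure described above: one array lookup for the first octet, then one dictionary lookup per remaining octet, i.e. four lookups in total. The class and method names are illustrative assumptions, and only exact host entries are shown; the prefix-range handling of the full algorithm is omitted.

```java
// Sketch of the 4-lookup structure: array on the first octet, then nested
// dictionaries. Names are illustrative, not taken from the paper.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SegmentedIpLookup {
    // Root level: direct array lookup on the first octet (0-255).
    @SuppressWarnings("unchecked")
    private final Map<Integer, Map<Integer, Map<Integer, String>>>[] root = new Map[256];

    // Insert an entry mapping an IPv4 address to a target (e.g. an NDC id).
    // Concurrent maps allow safe concurrent reads; updates should be serialized.
    public void insert(String ip, String target) {
        int[] s = segments(ip);
        if (root[s[0]] == null) {
            root[s[0]] = new ConcurrentHashMap<>();
        }
        root[s[0]].computeIfAbsent(s[1], k -> new ConcurrentHashMap<>())
                  .computeIfAbsent(s[2], k -> new ConcurrentHashMap<>())
                  .put(s[3], target);
    }

    // Resolve in 4 lookups: one array access and three dictionary lookups.
    public String lookup(String ip) {
        int[] s = segments(ip);
        Map<Integer, Map<Integer, Map<Integer, String>>> l1 = root[s[0]];
        if (l1 == null) return null;
        Map<Integer, Map<Integer, String>> l2 = l1.get(s[1]);
        if (l2 == null) return null;
        Map<Integer, String> l3 = l2.get(s[2]);
        if (l3 == null) return null;
        return l3.get(s[3]);
    }

    private static int[] segments(String ip) {
        String[] p = ip.split("\\.");
        return new int[]{Integer.parseInt(p[0]), Integer.parseInt(p[1]),
                         Integer.parseInt(p[2]), Integer.parseInt(p[3])};
    }

    public static void main(String[] args) {
        SegmentedIpLookup table = new SegmentedIpLookup();
        table.insert("224.201.10.5", "NDC-1");             // hypothetical entry
        System.out.println(table.lookup("224.201.10.5"));  // NDC-1
        System.out.println(table.lookup("224.201.10.6"));  // null (no entry)
    }
}
```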

2) Web Services. We make use of some basic runtime applications which issue requests to the data centers on which they are deployed across the various systems. The web services can be any system-based application used remotely by a wide number of users at distant locations. We make a WAR file of the respective application and deploy it over the workspace and the localhost server of the various devices. In this case the devices can be different laptops, mobile devices, Raspberry Pi kits, etc. All these devices should be connected to the same LAN so that the requests are not disrupted.
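Since the WAR packaging itself is routine, the sketch below stands in for one such deployed service using the JDK's built-in com.sun.net.httpserver: each device on the LAN runs a small HTTP endpoint on its localhost. The port, path and response text are illustrative assumptions; in the actual setup this role is played by the servlet packaged in the WAR file.

```java
// Minimal stand-in for a deployed web service on a device's localhost.
// Port, path and response text are illustrative assumptions.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class NanoServiceStub {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/app", exchange -> {
            byte[] body = ("Handled by NDC at " + exchange.getLocalAddress())
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);   // reply so the caller knows which server served it
            }
        });
        server.start();   // other LAN devices can now request http://<host>:8080/app
        System.out.println("Nano data server stub listening on port 8080");
    }
}
```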



3) EC Computation Algorithm. M_Energy 1 is the existing system. The Nano data center can take a current load of only 11. Initially it consumes 5 units in the idle state (Pidle = 5 units). It can take a total of 3 incoming connections, and each connection takes 2 units of energy. So, we calculate the energy using the formula:

Current Load = Pidle + CE = Pidle + (Current Connections × E per Connection)

The threshold U is the total value for the energy; it should be less than that of the current system. So, the current loads for the servers are:

Nano1 = 5 + (3 × 2) = 11
Nano2 = 5 + (1 × 2) = 7
Main = 50 + (100 × 2) = 250
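The Current Load computation above transcribes directly into runnable Java using the paper's example numbers (Pidle = 5 for the Nano servers, 50 for the Main server, E = 2 units per connection); the class and method names are illustrative.

```java
// Worked example of Current Load = Pidle + CE, using the numbers above.
public class CurrentLoadExample {
    static int currentLoad(int pIdle, int connections, int energyPerConnection) {
        return pIdle + connections * energyPerConnection;   // Pidle + CE
    }

    public static void main(String[] args) {
        System.out.println("Nano1 = " + currentLoad(5, 3, 2));     // 5 + (3 * 2)   = 11
        System.out.println("Nano2 = " + currentLoad(5, 1, 2));     // 5 + (1 * 2)   = 7
        System.out.println("Main  = " + currentLoad(50, 100, 2));  // 50 + (100 * 2) = 250
    }
}
```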

4) Flow Chart of EC Computation Algorithm.

Fig: EC Computation Algorithm.

4. RESULTS AND DISCUSSION

NDCs consume an insignificant amount of energy for some of the applications by moving data into the vicinity of the client-side users and reducing the energy consumed over the transport network.

With this enhancement we can raise the capacity of the servers to that of main servers, so in the second table we use a Pmax of 600 for the Nano servers as well as the main



servers, but the current connections can be increased to 100 with the same initial energy.

Current Load = Pidle + CE
Main1 = 50 + (3 × 2) = 61
Main3 = 50 + (1 × 2) = 52
Main2 = 50 + (100 × 2) = 250

Pmax is the maximum energy of the Nano server; it is given by: Pidle × Current Connections × (E per Connection).

Cmax is the maximum energy for a connection; it is given by: Pidle × (E per Connection).
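As a sketch, the Pmax and Cmax formulas can be applied as written to the example inputs used in the previous section; the printed values below follow from the formulas themselves and are not taken from the paper's tables. The class and method names are illustrative assumptions.

```java
// Pmax and Cmax exactly as stated above, applied to the earlier example inputs.
// The outputs are computed from the formulas, not quoted from the paper's tables.
public class MaxEnergyExample {
    // Pmax = Pidle * Current Connections * (E / Connection)
    static int pMax(int pIdle, int connections, int energyPerConnection) {
        return pIdle * connections * energyPerConnection;
    }

    // Cmax = Pidle * (E / Connection)
    static int cMax(int pIdle, int energyPerConnection) {
        return pIdle * energyPerConnection;
    }

    public static void main(String[] args) {
        System.out.println("Nano Pmax = " + pMax(5, 3, 2));    // 5 * 3 * 2    = 30
        System.out.println("Nano Cmax = " + cMax(5, 2));       // 5 * 2        = 10
        System.out.println("Main Pmax = " + pMax(50, 100, 2)); // 50 * 100 * 2 = 10000
        System.out.println("Main Cmax = " + cMax(50, 2));      // 50 * 2       = 100
    }
}
```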

5. CONCLUSION

Cloud computing has become the base of the new trend transforming our digital sector in IT. For large-scale and small-scale enterprise customers everywhere, cloud services are taking root due to their wide range of capabilities and advantages benefiting business revenue. Due to the increase in demand for cloud-based storage, applications and services, network traffic, energy consumption and routing are the upcoming major concerns.

In this thesis, we studied, analyzed and brought out results which discuss the outcomes of fog computing over cloud computing. We observed that the energy consumption of the Nano servers, called fogs, was considerably less when the applications are brought down to the network edge at the client-side devices. A detailed comparison of the energy consumed at the NDC level and the MDC level was carried out, with the outcome revealing that NDCs consumed less energy than MDCs for the same task performed in both scenarios.

6. FUTURE SCOPE

Despite the contributions of the present thesis to the energy utilization of cloud computing and fog computing, there are various open research challenges that should be tackled in order to further advance the area. Furthermore, the wide range of applications that fog can handle can be evaluated. Subsequently, as the energy consumption modelling and estimation strategies proposed in this thesis can be applied to PaaS and IaaS, it is worthwhile to study the energy consumption of PaaS and IaaS in end-user terminals, the transport network and data centers. Besides, our outcomes depend on the energy consumption of applications during the use phase; we did not consider the energy consumption of applications and services over their entire lifetime. Research taking a life-cycle perspective would be required to examine the total environmental footprint of the applications and services.



7. ACKNOWLEDGMENT

I would like to express my sincere gratitude to my advisors, Prof. Prabadevi B and Prof. Jeyanthi N, SITE School, VIT University, Vellore, for their continuous support of the research survey and analysis. I am much obliged for their supportive role and thankful to them for inspiring me to study this subject further. I am very glad to have had this experience of research paper writing as a way to reach the concepts.

8. REFERENCES

[1]. Flavio Bonomi, Rodolfo Milito, Preethi Natarajan and Jiang Zhu (2016) "Fog computing: A platform for internet of things and analytics."

[2]. Bonomi, F., Milito, R., Zhu, J., Addepalli, S. (2012) "Fog computing and its role in the internet of things."

[3]. Pao, L., Johnson, K. (2009) "A tutorial on the dynamics and control of wind turbines and wind farms." In: American Control Conference.

[4]. Botterud, A., Wang, J. (2009) "Wind power forecasting and electricity market operations." In: 32nd International Conference of the International Association for Energy Economics (IAEE), San Francisco, CA.

[5]. Cristea, V., Dobre, C., Pop, F. (2013) "Context-aware environment internet of things." Internet of Things and Inter-cooperative Computational Technologies for Collective Intelligence, Studies in Computational Intelligence, vol. 460, pp. 25–49.

[6]. Haak, D. (2010) "Achieving high performance in smart grid data management." White paper from Accenture.

[7]. "Green cloud computing: Balancing energy in processing, storage, and transport" (2011).

[8]. Vytautas Valancius, Nikolaos Laoutaris, Laurent Massoulié (2009) "Greening the Internet with Nano Data Centers."

[9]. Fatemeh Jalali, Kerry Hinton, Robert Ayre, Tansu Alpcan, and Rodney S. Tucker "Fog Computing May Help to Save Energy in Cloud Computing." Centre for Energy-Efficient Telecommunications (CEET), The University of Melbourne, Australia.

[10]. Shanhe Yi, Cheng Li, Qun Li (2015) "A Survey of Fog Computing: Concepts, Applications and Issues." Department of Computer Science, College of William and Mary, Williamsburg, VA, USA.

[11]. An Tran Thien, Ricardo Colomo (2016) "A Systematic literature review of Fog Computing."



[12]. Amir Vahid Dastjerdi, Rajkumar Buyya (2016) "Fog Computing: Helping the Internet of Things Realize its Potential."

[13]. (2015) "Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are." Cisco and/or its affiliates.

