Proc. of Int. Conf. on Advances in Electrical & Electronics 2010
Analysis and Minimization Technique for Leakage Reduction in Two-Input NOR Gate

Vaibhav Neema¹, Sanjiv Tokekar²
¹Lecturer, ²Professor
Department of Electronics & Telecommunication, IET, Devi Ahilya University, Indore
{vaneema.iet, sanjivtokekar}@dauniv.ac.in
Abstract:- Leakage current in CMOS circuit technology is a major concern for technology nodes below 100 nm, as it drains the battery even when a circuit is completely idle. This paper proposes a novel approach to minimize leakage current in CMOS circuits during the off state (standby or sleep mode). We first present a novel leakage reduction technique for a two-input NOR gate and then compare it with well-established leakage reduction techniques. The proposed technique controls leakage current for every possible input vector applied to the two-input NOR gate. Experiments were conducted using the TANNER EDA tool with a 90 nm Predictive Technology Model file and a 1 V supply voltage. Results show that the proposed technique achieves a 90% reduction in static power with only a small delay penalty.

Index Terms: Standby Mode, Idle Mode, deep submicron

© 2010 ACEEE  DOI: 02.AEE.2010.01.136

I. INTRODUCTION

There is a growing need for high-performance and low-power systems, especially for portable and battery-powered applications. Battery-powered electronic systems form the backbone of the growing market of mobile handheld devices used all over the world today. To maximize battery life, the tremendous computational capacity of portable devices such as notebook computers, personal communication devices (cell phones, pocket PCs, PDAs), hearing aids, and implantable pacemakers has to be realized with very low power requirements [1]. With miniaturization and the growing trend towards wireless communication, power dissipation has become a critical design metric: the longer the battery lasts, the better. Leakage current reduction is therefore a main concern for this type of application. A number of methods have been proposed to address this problem [2]-[3]. However, with continued process scaling, lower supply voltages necessitate reduced threshold voltages to meet performance goals, resulting in a dramatic increase in subthreshold leakage current. Minimizing power consumption is currently an extremely challenging area of research, especially with on-chip devices doubling every two years [4], [5]. Design styles [5] play a key role in determining the power dissipation, performance, and supply/threshold scalability of a circuit. Dynamic circuits achieve high levels of performance (speed) and utilize less area. However, they require two operation phases, precharging and evaluation, cannot be scaled easily due to their low noise immunity, and require keeper circuits to restore logic levels. On the other hand, fully Complementary Metal Oxide Semiconductor (CMOS) styles are usually robust, dissipate low power, have fully restored logic levels, and are easily scalable.

II. APPLICATION AREA

Cell phones and pocket PCs contain burst-mode integrated circuits, which are in an idle state for the majority of the time. For such circuits it is acceptable to have leakage during the active mode; during the idle state, however, leakage is extremely wasteful, as power is consumed with no useful work being done. Despite present advances in power management techniques [6], leakage loss is a major concern in deep-submicron technologies, as it drains the battery even when a circuit is completely idle. Power dissipation of high-performance processors and servers is predicted to increase linearly over the next decade. The 2006 International Technology Roadmap for Semiconductors [7] projects power dissipation to reach 198 Watts in the year 2008 and 300 Watts by the year 2018. Multi-core integrated processors deliver significantly greater compute power through concurrency, offer greater system density, and run at lower clock speeds, thereby reducing thermal dissipation and power consumption to an extent. Leakage power will contribute the majority of the total power consumption for such servers fabricated with deep-submicron technologies. Figure 1 shows subthreshold leakage power trends [2] in accordance with Moore's law. Clearly, with deep-submicron processes, chips will leak excessive amounts of power. By the year 2020, leakage is expected to increase 32 times per device [6]. This is a major challenge in scaling down designs, and it motivates the need for efficient leakage control mechanisms to minimize power overheads in circuits designed with deep-submicron technologies.
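The dramatic growth of subthreshold leakage as threshold voltage is lowered can be illustrated with the standard weak-inversion model, in which leakage scales as exp(-Vth/(n·VT)). The sketch below is illustrative only and is not taken from this paper: the subthreshold slope factor n ≈ 1.5 and room-temperature thermal voltage VT ≈ 26 mV are assumed textbook values, and the function name is hypothetical.

```python
import math

def subthreshold_leakage_ratio(delta_vth, n=1.5, thermal_voltage=0.026):
    """Factor by which subthreshold leakage grows when the threshold
    voltage is reduced by delta_vth volts, per the weak-inversion
    model I_sub ∝ exp(-Vth / (n * VT))."""
    return math.exp(delta_vth / (n * thermal_voltage))

# Under these assumed parameters, a 100 mV threshold reduction
# increases subthreshold leakage by roughly an order of magnitude (~13x).
print(round(subthreshold_leakage_ratio(0.100), 1))
```

This exponential sensitivity is why scaling supply and threshold voltages together, while preserving speed, makes standby-mode leakage control techniques such as the one proposed here increasingly important.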