AN ANALYSIS OF PARALLEL PROCESSING AT MICROLEVEL


International Engineering Journal For Research & Development, E-ISSN No: 2349-0721, Volume 1, Issue 1

Vina S. Borkar, Dept. of Computer Science and Engineering, St. Vincent Pallotti College of Engineering and Technology, Nagpur, India. vinaborkar@gmail.com

------------------------------------------------------------------------------------------------------------------------

Abstract: To achieve performance, processors rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). ILP and TLP are fundamentally identical: both identify independent instructions that can execute in parallel and can therefore utilize parallel hardware. In this paper we begin by examining the constraints that program structure places on ILP (dependences, branch prediction, window size, latency), and then present thread-level parallelism as an alternative or addition to instruction-level parallelism. The paper explores parallel processing on an alternative architecture, simultaneous multithreading (SMT), which allows multiple threads to compete for and share all of the processor's resources every cycle. The most compelling reason for running parallel applications on an SMT processor is its ability to use thread-level parallelism and instruction-level parallelism interchangeably: by permitting multiple threads to share the processor's functional units simultaneously, the processor can draw on both ILP and TLP to accommodate variations in available parallelism.

Keywords: TLP, ILP, branch prediction, coarse-grain, SMT.

I. Introduction

Instruction-level parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by causing individual machine operations, such as memory loads and stores, integer additions, and floating-point multiplications, to execute in parallel [1]. Like circuit-speed improvements, but unlike traditional multiprocessor parallelism and massively parallel processing, these techniques are largely transparent to users. A foundational ILP technique is pipelining. Pipelining breaks a processor into multiple stages and creates a pipeline that instructions pass through, functioning much like an assembly line: an instruction enters at one end, passes through the different stages of the pipe, and exits at the other end. VLIWs and superscalars are examples of processors that derive their benefit from instruction-level parallelism, and software pipelining and trace scheduling are example software techniques that expose the parallelism these processors can use. A superscalar machine is one that can issue multiple independent instructions in the same cycle. A superpipelined machine issues one instruction per cycle, but the cycle time is set much less than the typical

www.iejrd.in


