
Jun Yang, PhD

1140 Benedum Hall | 3700 O’Hara Street | Pittsburgh, PA 15261 P: 412-624-9088

juy9@pitt.edu

Professor

Computer Engineering Program

Emerging Non-volatile Memory Technologies

Data centers in the U.S. consume well over 100 billion kilowatt-hours of energy yearly, according to the U.S. Department of Energy. One of the most power-hungry parts of a data center has traditionally been the processor. With technology scaling and applications' growing demand for more memory, the majority of data-center power consumption is shifting from the processor to main memory. Today's memory technology cannot keep pace with this change and is reaching its limits in power consumption and density at data-center scales. This project aims to solve the energy problem posed by memory. Rather than relying solely on DRAM, our approach integrates emerging non-volatile memories, e.g., Phase Change Memory and Spin-Transfer Torque Magnetic RAM, to construct a high-capacity and energy-efficient memory system. The research is ambitious, with both fundamental and applied contributions to the design and development of energy-efficient computer servers. The fundamental research includes: a new integrated memory architecture that manages hybrid memory resources; novel techniques for energy, performance, endurance, and fault-tolerance management; and a new hybrid main memory controller. The applied contributions are the tools we develop, including simulation and analytic models and an actual software/emulated hardware system.
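As a rough illustration of why a hybrid organization can help, the sketch below is a back-of-the-envelope analytic energy model for a DRAM-plus-PCM main memory in which a small "hot" tier of pages stays in DRAM and the cold bulk lives in PCM. All device parameters and the workload mix are illustrative assumptions chosen for the example, not figures from the project.

```python
# Minimal sketch of an analytic energy model for a hybrid DRAM + PCM main
# memory. All device parameters and the workload profile are illustrative
# assumptions, not measured figures from the project.

DRAM_READ_NJ, DRAM_WRITE_NJ, DRAM_STANDBY_MW_PER_GB = 1.2, 1.2, 100.0
PCM_READ_NJ, PCM_WRITE_NJ, PCM_STANDBY_MW_PER_GB = 2.0, 15.0, 1.0  # non-volatile: ~no refresh

def memory_energy_joules(reads, writes, gb_dram, gb_pcm, seconds):
    """Rough estimate: dynamic access energy plus standby/refresh power.

    reads/writes are (dram_count, pcm_count) tuples of memory accesses.
    """
    dyn = (reads[0] * DRAM_READ_NJ + writes[0] * DRAM_WRITE_NJ +
           reads[1] * PCM_READ_NJ + writes[1] * PCM_WRITE_NJ) * 1e-9
    static = (gb_dram * DRAM_STANDBY_MW_PER_GB +
              gb_pcm * PCM_STANDBY_MW_PER_GB) * 1e-3 * seconds
    return dyn + static

# Example: 90% of accesses hit a small hot DRAM tier; the cold bulk is in PCM.
accesses = 2e9
hybrid = memory_energy_joules(
    reads=(0.9 * accesses, 0.1 * accesses),
    writes=(0.9 * accesses * 0.3, 0.1 * accesses * 0.3),
    gb_dram=8, gb_pcm=56, seconds=60)
dram_only = memory_energy_joules(
    reads=(accesses, 0), writes=(accesses * 0.3, 0),
    gb_dram=64, gb_pcm=0, seconds=60)
print(f"hybrid ~ {hybrid:.1f} J, DRAM-only ~ {dram_only:.1f} J")
```

Under these assumed numbers the hybrid system's savings come almost entirely from eliminating DRAM standby/refresh power on the cold capacity, which is the intuition behind shifting bulk capacity to non-volatile memory.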

Nanophotonic Interconnection Network

Electrical on-chip networks are facing severe challenges in power, latency, and bandwidth density with technology scaling. These challenges are especially pronounced in the era of multi-core computing, where high-bandwidth, low-power, and low-latency global transmission is required. For these reasons, and because of recent breakthroughs in nanophotonic devices, optical interconnection is again being considered as a potential on-chip network for future many-core microprocessors. However, there are still fundamental limitations in nanophotonic networks that hinder their success in competing with their electrical counterparts. This project aims at future optical Networks-on-Chip and targets bandwidth, performance, power/energy, and reliability, all fundamental problems of on-chip optics. We use complementary solutions, joining device-level innovations, architectural novelties, and operating-system originalities into a systematic framework. The objective is to make on-chip nanophotonics a more practical and applicable technology for future many-core microprocessors.
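To make the energy trade-off concrete, the sketch below compares per-bit energy of an electrical repeated wire with a wavelength-multiplexed nanophotonic link. The coefficients (wire energy per millimeter, laser and ring-heating power, modulator and receiver energy) are rough illustrative assumptions rather than device measurements; the point is only that optical link energy is largely distance-independent, while electrical wire energy grows with length.

```python
# Minimal sketch of a per-bit energy comparison between an electrical link and
# a nanophotonic (WDM) on-chip link. All coefficients are illustrative
# assumptions, not measured device figures.

def electrical_pj_per_bit(length_mm, pj_per_bit_per_mm=0.2):
    # Repeated-wire dynamic energy scales roughly linearly with distance
    # (assumed coefficient).
    return pj_per_bit_per_mm * length_mm

def optical_pj_per_bit(bitrate_gbps, laser_mw=0.5, ring_heating_mw=0.3,
                       modulator_fj=50, receiver_fj=100):
    # Static laser and thermal-tuning power is amortized over the bit rate;
    # modulator/receiver energy is paid per bit.  (mW / Gb/s == pJ/bit)
    static_pj = (laser_mw + ring_heating_mw) / bitrate_gbps
    return static_pj + (modulator_fj + receiver_fj) * 1e-3

for length in (2, 10, 20):  # millimeters across the die
    print(f"{length:2d} mm: electrical ~ {electrical_pj_per_bit(length):.2f} pJ/bit, "
          f"optical ~ {optical_pj_per_bit(10):.2f} pJ/bit")
```

In this toy model the optical link wins for long global traversals but not necessarily for short hops, which is one reason device-level, architectural, and system-level decisions have to be made jointly.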

Power and Thermal Management for Future Microprocessors

Many mobile embedded systems are designed to be small and compact to favor portability. As user demand grows for more powerful, versatile, and integrated solutions, designers endeavor to pack more and more devices into these small embedded form factors, aided by advances in technology. In parallel with this trend, microprocessor technology has evolved into an era of integrating multiple cores on one die, a.k.a. the chip multiprocessor or CMP, to support concurrent execution of multiple applications. The recent promotion of 3D stacking technology (stacking multiple dies vertically) further enables smaller chip footprints and packaging at higher transistor density. As can be foreseen, the marriage of future embedded systems and future microprocessors raises concerns about increased power density, which makes thermal management a key challenge for embedded processors. This project recognizes these incessant power and thermal challenges and the limitations of current approaches, and promotes a new suite of solutions at a higher level that address those drawbacks. It is important to raise power and thermal awareness to a high level, such as the embedded operating system, where application thermal behavior is more explicit. The proposed techniques leverage the natural discrepancies in power and thermal behavior among different application threads, and allocate, migrate, or schedule them among different cores. The goal is to minimize power consumption and thermal violations through proactive thread management at a high level.
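The sketch below illustrates the flavor of such a proactive, OS-level policy: estimate each thread's power behavior and pair the most power-hungry threads with the coolest cores. The per-thread power estimates, core temperatures, and greedy pairing rule are illustrative assumptions for the example, not the project's actual algorithm.

```python
# Minimal sketch of proactive, thermally aware thread placement at the OS
# level. Thread power estimates, core temperatures, and the greedy pairing
# policy are illustrative assumptions, not the project's actual technique.

def thermal_aware_assignment(thread_power_w, core_temp_c):
    """Greedily map the most power-hungry threads onto the coolest cores."""
    threads = sorted(range(len(thread_power_w)),
                     key=lambda t: thread_power_w[t], reverse=True)
    cores = sorted(range(len(core_temp_c)), key=lambda c: core_temp_c[c])
    return {t: cores[i % len(cores)] for i, t in enumerate(threads)}

# Example: four threads with different power behavior on four cores whose
# temperatures have diverged (e.g., due to 3D stacking and prior load).
thread_power = [8.0, 2.5, 6.0, 1.0]   # estimated watts per thread (assumed)
core_temp = [72.0, 65.0, 80.0, 60.0]  # current per-core temperature in C
print(thermal_aware_assignment(thread_power, core_temp))
# -> the hottest thread (0) lands on the coolest core (3), and so on
```

Re-running such an assignment periodically lets the scheduler migrate threads away from cores approaching a thermal threshold before a violation occurs, rather than reacting after the fact.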
