
Speculative techniques for reducing the memory bottleneck problem

Grant number: 16/18929-4
Support Opportunities: Scholarships abroad - Research Internship - Post-doctor
Start date: January 01, 2017
End date: December 31, 2017
Field of knowledge: Physical Sciences and Mathematics - Computer Science - Computer Systems
Principal Investigator: Rodolfo Jardim de Azevedo
Grantee: Lois Orosa Nogueira
Supervisor: Onur Mutlu
Host Institution: Instituto de Computação (IC). Universidade Estadual de Campinas (UNICAMP). Campinas, SP, Brazil
Institution abroad: Swiss Federal Institute of Technology Zurich, Switzerland
Associated to the scholarship:14/03840-2 - Architectural support for programs speculative execution, BP.PD

Abstract

In this project we aim to tackle the growing disparity between processor and memory speeds. DRAM technology is no longer scaling, and the increasing number of cores per processor is raising the pressure on main memory systems. There are several ways to reduce or hide this problem. One way is to reduce main memory latency by re-architecting DRAM, by designing new memory controller schemes, or by introducing new memory technologies that reduce the effective access latency of main memory. Another way is to hide the memory latency in superscalar processors, avoiding processor stalls by keeping the processor busy with independent or speculative instructions; examples are value prediction, runahead execution, and instruction reuse. A further alternative is to improve caching techniques (more effective caches, cache compression, etc.) to increase the probability of cache hits. Finally, prefetching techniques predict which lines will be requested in the future and request them before they are needed, also improving the cache hit ratio. At this stage of the project, and with this broad goal in mind, we believe speculative techniques still have great potential for improvement and impact through new ideas. Value speculation, runahead execution, and prefetching are the techniques on which we want to focus. Value speculation predicts the result of an instruction and speculatively executes the following dependent instructions using the predicted value. Runahead execution continues execution during processor stalls, with the aim of prefetching instructions and data. We want to study the relationship between value speculation, runahead execution, and prefetching, in order to design a simpler and more effective technique that gets the most out of all three with minimal hardware support. (AU)
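
For illustration only, the sketch below shows the core idea behind value speculation as a simple last-value predictor in C. This is not the project's actual design; the table size, field names, and confidence threshold are assumptions chosen to keep the example minimal.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative last-value predictor: one table entry per load PC.
 * All sizes, names, and thresholds are assumptions for illustration. */
#define VP_ENTRIES 1024

typedef struct {
    uint64_t tag;         /* load PC that owns this entry                 */
    uint64_t last_value;  /* value returned by the previous execution     */
    uint8_t  confidence;  /* saturating counter; predict only when high   */
} vp_entry_t;

static vp_entry_t vp_table[VP_ENTRIES];

/* Predict the value a load at 'pc' will return.
 * Returns true (and writes the prediction) only when confidence is high,
 * so dependent instructions may execute speculatively with that value. */
bool vp_predict(uint64_t pc, uint64_t *prediction) {
    vp_entry_t *e = &vp_table[pc % VP_ENTRIES];
    if (e->tag == pc && e->confidence >= 3) {
        *prediction = e->last_value;
        return true;
    }
    return false;
}

/* Train the predictor when the load actually completes.
 * A repeated value raises confidence; a change resets it, and a wrong
 * prediction would require squashing the speculatively executed dependents. */
void vp_update(uint64_t pc, uint64_t actual_value) {
    vp_entry_t *e = &vp_table[pc % VP_ENTRIES];
    if (e->tag != pc) {                  /* new load: allocate the entry */
        e->tag = pc;
        e->last_value = actual_value;
        e->confidence = 0;
        return;
    }
    if (e->last_value == actual_value) {
        if (e->confidence < 3) e->confidence++;
    } else {
        e->confidence = 0;               /* value changed: lose confidence */
        e->last_value = actual_value;
    }
}
```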


Scientific publications (4)
(References retrieved automatically from Web of Science and SciELO through information on FAPESP grants and their corresponding numbers as mentioned in the publications by the authors)
SADROSADATI, MOHAMMAD; EHSANI, SEYED BORNA; FALAHATI, HAJAR; AUSAVARUNGNIRUN, RACHATA; TAVAKKOL, ARASH; ABAEE, MOJTABA; OROSA, LOIS; WANG, YAOHUA; SARBAZI-AZAD, HAMID; MUTLU, ONUR. ITAP: Idle-Time-Aware Power Management for GPU Execution Units. ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, v. 16, n. 1. (16/18929-4)
OROSA, LOIS; AZEVEDO, RODOLFO; MUTLU, ONUR. AVPP: Address-first Value-next Predictor with Value Prefetching for Improving the Efficiency of Load Value Prediction. ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, v. 15, n. 4. (13/08293-7, 14/03840-2, 16/18929-4)
WANG, YAOHUA; TAVAKKOL, ARASH; OROSA, LOIS; GHOSE, SAUGATA; GHIASI, NIKA MANSOURI; PATEL, MINESH; KIM, JEREMIE S.; HASSAN, HASAN; SADROSADATI, MOHAMMAD; MUTLU, ONUR; et al. Reducing DRAM Latency via Charge-Level-Aware Look-Ahead Partial Restoration. 2018 51ST ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), 14 pp. (16/18929-4)
TAVAKKOL, ARASH; SADROSADATI, MOHAMMAD; GHOSE, SAUGATA; KIM, JEREMIE S.; LUO, YIXIN; WANG, YAOHUA; GHIASI, NIKA MANSOURI; OROSA, LOIS; GOMEZ-LUNA, JUAN; MUTLU, ONUR; et al. FLIN: Enabling Fairness and Enhancing Performance in Modern NVMe Solid State Drives. 2018 ACM/IEEE 45TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA), 14 pp. (16/18929-4)