Memory allocation and balancing techniques on NUMA machines

Grant number: 14/15523-1
Support type: Scholarships in Brazil - Master
Effective date (Start): March 01, 2015
Effective date (End): August 31, 2016
Field of knowledge: Physical Sciences and Mathematics - Computer Science
Cooperation agreement: Coordination of Improvement of Higher Education Personnel (CAPES)
Principal Investigator: Guido Costa Souza de Araújo
Grantee: Martin Ichilevici de Oliveira
Home Institution: Instituto de Computação (IC). Universidade Estadual de Campinas (UNICAMP). Campinas, SP, Brazil
Associated scholarship(s): 15/12187-3 - Memory allocation and balancing techniques for NUMA machines, BE.EP.MS


The NUMA (Non-Uniform Memory Access) model has enabled a considerable increase in the scalability of parallel architectures. However, many of today's computer systems do not account for the different latencies of local and remote memory accesses, which can lead to large performance losses. Manual control by the programmer is costly and error-prone, so an automatic memory-distribution management system is essential. This project aims to implement a model that controls, independently of the programmer, the placement of a program's pages across local and remote memory. The model will be responsible for determining the program's memory access patterns, remaining responsive to variations in those patterns, and for distributing memory pages among the nodes so as to reduce the program's average access latency. The Linux balancing algorithm will serve as the foundation for this work and will be modified to accommodate an interface through which the application can communicate to the kernel the pages most likely to be accessed. The main idea behind this strategy is to let the application, which can often foresee which pages will be accessed most frequently, share that knowledge with the kernel. This approach, which combines application-level information with the kernel's balancing algorithm, has not been deeply explored in the literature. With this model's implementation, we intend to improve the performance of memory-intensive programs by placing memory pages close to the nodes that operate on them. The model is expected to be flexible and easy to use, since it will not require user intervention. (AU)
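The project's kernel interface is not specified here, but the stock Linux kernel already exposes a primitive in this spirit: the move_pages(2) system call, with which a process can ask that specific pages be migrated to a chosen NUMA node. The sketch below illustrates that existing mechanism only; the function name `hint_page_node` and the choice of target node are assumptions for illustration, not part of the project's proposed interface.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <linux/mempolicy.h>   /* MPOL_MF_MOVE */

/* Illustrative sketch (not the project's actual interface): hint that a
 * page is "hot" on a given NUMA node by asking the kernel to migrate it
 * there with move_pages(2). Returns the page's resulting node (>= 0 means
 * it now resides on that node) or a negative errno value on failure, e.g.
 * on kernels built without NUMA support. */
static int hint_page_node(void *addr, int node)
{
    void *pages[1]  = { addr };
    int   nodes[1]  = { node };
    int   status[1] = { -1 };

    /* pid 0 = calling process; count 1; MPOL_MF_MOVE migrates only pages
     * mapped exclusively by this process. */
    if (syscall(SYS_move_pages, 0, 1UL, pages, nodes, status,
                MPOL_MF_MOVE) != 0)
        return -errno;
    return status[0];  /* target node on success, or a per-page error */
}
```

A program would touch a freshly mapped buffer (so its pages are actually allocated) and then call `hint_page_node(buf, n)` for each hot page before the compute phase begins; the project's proposal goes further by feeding such hints into the kernel's automatic balancing rather than issuing one-shot migrations.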
