The Parallel Applications Modelling research line is centered on the study of high-performance solutions for problems originating in other areas of knowledge (Biology, Geology, Physics, etc.). Developing high-performance solutions requires a grasp of many levels of specialized knowledge. Besides analyzing the algorithmic complexity of the problem to be parallelized, it is also necessary to understand development environments and program testing, parallel programming techniques, performance evaluation, and high-performance architectures.
The HSPA (High-level and Structured Parallelism Abstractions) research line aims to create programming interfaces for users/programmers who are not versed in parallel programming paradigms. The idea is to offer a higher level of abstraction without compromising application performance. The interfaces developed in this research line target specific domains and can later be extended to other areas. The scope of the study is broad regarding the technologies used to develop both the interface and the parallelism.
This research line aims to study and propose software solutions to reduce energy consumption in high-performance environments. The target environments of the research developed in this line are clusters, grids, and clouds. Examples of proposed solutions include the creation of scheduling policies and the use of hardware techniques aimed at reducing energy consumption when running large applications.
Adriano Vogel successfully defended his Master's thesis, “Adaptive Degree of Parallelism for the SPar Runtime”, advised by Dr. Luiz Gustavo Leão Fernandes and Dr. Dalvan Griebler, at the School of Te
The VII NVIDIA GPU Workshop, organized by NVIDIA, PUCRS, and GMAP, was a great success. The event took place on Friday, 10/11/2017, at PUCRS. From the very beginning, there was a large turnout...
This will be the VII NVIDIA GPU Workshop, held for the first time in the south of Brazil. It is an event for developers working with Artificial Intelligence, Deep Learning, High-Performance