Abstract
The highly parallel machines needed for compute-intensive scientific applications distribute physical memory across their compute nodes. The drawback of such systems is that applications must be written in the message-passing programming model. Consequently, considerable research is devoted to higher-level programming models and to the hardware, operating system techniques, and languages that support them. The research direction outlined in this article is based on shared virtual memory (SVM) systems, i.e., scalable parallel systems with a global address space that support an adaptive mapping of global addresses to physical memories. We introduce programming concepts and program optimizations for SVM systems in the context of the SVM-Fortran programming environment, which is based on a shared virtual memory system implemented on the Intel Paragon. Performance results for real applications show that this environment enables users to obtain performance similar to or better than that achieved by programming in HPF.
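For readers unfamiliar with the HPF baseline mentioned above, the sketch below shows a toy relaxation kernel annotated with standard HPF directives: in HPF the programmer states the data distribution explicitly, whereas on an SVM system the same loop can operate on a plain shared array, with pages of the global address space mapped to nodes adaptively at run time. The directives follow the HPF standard, but the program itself is a hypothetical illustration, not SVM-Fortran code or an example taken from the paper.

```fortran
! Illustrative HPF-annotated kernel (hypothetical example, not from the paper).
program relax
  implicit none
  integer, parameter :: n = 1024
  real :: a(n, n), b(n, n)
  integer :: i, j
! HPF requires explicit data-distribution directives in the specification part:
!HPF$ PROCESSORS p(4)
!HPF$ DISTRIBUTE a(*, BLOCK) ONTO p
!HPF$ ALIGN b(:, :) WITH a(:, :)
  a = 0.0
  a(1, :) = 1.0
  b = 0.0
! Iterations of the j loop are independent, so an HPF compiler may run them
! in parallel and generate the message passing needed for boundary columns.
!HPF$ INDEPENDENT
  do j = 2, n - 1
    do i = 2, n - 1
      b(i, j) = 0.25 * (a(i-1, j) + a(i+1, j) + a(i, j-1) + a(i, j+1))
    end do
  end do
  print *, 'b(n/2, n/2) =', b(n/2, n/2)
end program relax
```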
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 383-400 |
| Number of pages | 18 |
| Journal | Parallel Computing |
| Volume | 24 |
| Issue number | 3-4 |
| DOIs | |
| State | Published - May 1998 |
| Externally published | Yes |
Keywords
- Distributed memory computers
- Language constructs for data locality optimization
- Parallel programming models
- Performance analysis tools
- Scientific computing
- Shared virtual memory