High-level programming of massively parallel computers based on shared virtual memory

Research output: Contribution to journal › Article › peer-review

Abstract

Highly parallel machines needed to solve compute-intensive scientific applications are based on the distribution of physical memory across the compute nodes. The drawback of such systems is the necessity to write applications in the message passing programming model. Therefore, a lot of research is going on in higher-level programming models and in supportive hardware, operating-system techniques, and languages. The research direction outlined in this article is based on shared virtual memory (SVM) systems, i.e., scalable parallel systems with a global address space that support an adaptive mapping of global addresses to physical memories. We introduce programming concepts and program optimizations for SVM systems in the context of the SVM-Fortran programming environment, which is based on a shared virtual memory system implemented on the Intel Paragon. The performance results for real applications show that this environment enables users to obtain similar or better performance than programming in HPF.
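To make the contrast in the abstract concrete: in a shared-virtual-memory (global address space) model the programmer writes plain loops over globally addressable arrays and leaves data placement to the system, instead of packaging explicit sends and receives as in message passing. The sketch below is not SVM-Fortran and does not reproduce its directives; it is a minimal C/OpenMP illustration on a single shared-memory node, assuming only standard OpenMP pragmas, of the programming style an SVM system aims to preserve on distributed-memory hardware.

```c
/* Illustrative sketch only: a shared-address-space loop with no explicit
 * message passing. Data placement is handled by the runtime/OS (e.g. via
 * first-touch page allocation), analogous to the adaptive mapping of global
 * addresses to physical memories that an SVM system performs across nodes. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000L

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) return 1;

    /* Parallel initialization: the thread that first touches a page tends to
     * get it placed in its local memory, so later accesses stay local. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) {
        a[i] = (double)i;
        b[i] = 0.0;
    }

    /* The compute loop indexes directly into the global arrays; there are no
     * sends, receives, or explicit data distribution statements. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) {
        b[i] = 2.0 * a[i];
    }

    printf("b[N-1] = %f\n", b[N - 1]);
    free(a);
    free(b);
    return 0;
}
```

Compile with an OpenMP-capable compiler (e.g. `gcc -fopenmp`). The point of the sketch is the programming model, not performance: the locality and placement concerns that the paper's program optimizations address are exactly what such plain loops hide from the programmer.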

Original language: English
Pages (from-to): 383-400
Number of pages: 18
Journal: Parallel Computing
Volume: 24
Issue number: 3-4
DOIs
State: Published - May 1998
Externally published: Yes

Keywords

  • Distributed memory computers
  • Language constructs for data locality optimization
  • Parallel programming models
  • Performance analysis tools
  • Scientific computing
  • Shared virtual memory
