How to Write Parallel Programs: A Guide to the Perplexed
Nicholas Carriero, David Gelernter
ACM Computing Surveys 21(3), pages 323-357
September 1989
We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers: an 18-node shared-memory multiprocessor and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented. We focus on the paradigms, not on machine architecture or programming languages: The programming methods we discuss are useful on many kinds of parallel machines, and each can be expressed in several different parallel programming languages. Our programming discussion and the examples use the parallel language C-Linda for several reasons: The main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.
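For readers unfamiliar with C-Linda, a minimal sketch of the master/worker idiom the abstract alludes to (agenda parallelism over a distributed bag of tasks held in tuple space) might look as follows. This is not code from the paper: the tuple names ("task", "result", "worker"), the task and worker counts, and the stand-in computation are illustrative assumptions; only the Linda operations out, in, inp, and eval are the language's actual primitives.

#include <stdio.h>

#define NWORKERS 4
#define NTASKS   16

int worker(void);

/* C-Linda programs start at real_main(); the Linda runtime supplies main(). */
int real_main(int argc, char *argv[])
{
    int i, id, value, total = 0;

    /* Fill tuple space with a bag of tasks: a distributed data
       structure that every worker process can reach directly. */
    for (i = 0; i < NTASKS; i++)
        out("task", i);

    /* Spawn the workers as live tuples. */
    for (i = 0; i < NWORKERS; i++)
        eval("worker", worker());

    /* Withdraw one result tuple per task; in() blocks until a match exists. */
    for (i = 0; i < NTASKS; i++) {
        in("result", ?id, ?value);
        total += value;
    }
    printf("total = %d\n", total);
    return 0;
}

int worker(void)
{
    int id;

    /* Repeatedly grab a task tuple; stop when the bag is empty. */
    while (inp("task", ?id))
        out("result", id, id * id);   /* stand-in for real work */
    return 0;
}

Because the tasks live in tuple space rather than in any one process, the same sketch runs unchanged on shared-memory and distributed-memory machines, which is the point the abstract makes about focusing on paradigms rather than architectures.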
Publication
— authors
Nicholas Carriero, David Gelernter
— status
published
— sort
article in journal
— publication date
September 1989
— journal
ACM Computing Surveys
— volume
21
— issue
3
— pages
323-357
— address
New York, NY, USA
identifiers
— DOI
— print ISSN
0360-0300