The title covers so many possibilities, from the level of processor architecture to grid computing, that no book could possibly cover the whole range. The selections found here surprised me, though: just a few topics, but remarkable depth in at least one of them.
The authors begin with a classic model of computing, the parallel RAM. This conceptual family includes shared-memory systems of many kinds, possibly with restrictions on concurrent reading and/or writing. Here, the emphasis is on data structures that optimize concurrent computation on linked structures. The next chapter covers 'sorting networks.' These consist of two-input, two-output devices that compute the min and max of the input, generally configured with fixed topology. Although classics of the computing canon (see Knuth), they still represent a potentially useful structure, e.g. when creating a fixed-latency median filter in hardware. These chapters seem to tie only loosely to the main substance of the book, which begins in chapter 3.
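The comparator networks described above are easy to sketch in software. The following is a minimal illustration of my own (not code from the book): a fixed five-comparator network that sorts four inputs. Every input passes through the same sequence of min/max stages regardless of the data, which is exactly the property that makes such networks attractive for fixed-latency hardware like the median filter mentioned above:

```python
from itertools import permutations

# A classic 4-input sorting network: five compare-exchange stages,
# each given as a pair of wire indices (i, j) with i < j.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def run_network(values, network):
    """Apply a fixed comparator network to a list of values."""
    wires = list(values)
    for i, j in network:
        if wires[i] > wires[j]:              # compare-exchange:
            wires[i], wires[j] = wires[j], wires[i]  # min stays on wire i
    return wires

print(run_network([7, 1, 4, 2], NETWORK_4))  # [1, 2, 4, 7]
```

Because the comparator sequence never depends on the input, the network's depth (here, five stages, three of which can be parallelized into layers) fixes its latency in advance, unlike data-dependent sorts such as quicksort.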
Chapter 3 begins with a few classic communication topologies, including rings, stars, meshes, and hypercubes. The authors present performance models of graduated sophistication, as well as some low-level operations on specific architectures (e.g., broadcast on a simple ring). Chapters 4 and 5 get to the kinds of algorithms that highly parallel processors are typically called on to handle: matrix operations, stencil algorithms, and solving systems of linear equations. These chapters stand out for their close coupling between problem decomposition and the communication topology involved. Chapters 6-8 discuss load balancing and task scheduling, both static and dynamic. Algorithms like LU decomposition on dense systems have reasonably predictable performance and communication characteristics over the course of a computation, so both static and dynamic approaches are covered.
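The ring broadcast mentioned above is simple to model. Here is a small simulation sketch of my own (not the book's code) of a unidirectional ring, in which the source's message hops neighbor to neighbor and reaches all p processors in p - 1 communication steps:

```python
def ring_broadcast(p, source=0):
    """Simulate a broadcast on a unidirectional ring of p processors.

    Returns a dict mapping each processor to the step at which it
    receives the message; the source holds it at step 0.
    """
    recv_step = {source: 0}
    holder = source
    for step in range(1, p):        # p - 1 point-to-point sends
        nxt = (holder + 1) % p      # pass to the right-hand neighbor
        recv_step[nxt] = step
        holder = nxt
    return recv_step

print(ring_broadcast(4))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

Under a simple linear cost model with per-message latency L and per-byte transfer time b, this pattern costs roughly (p - 1)(L + m*b) for an m-byte message, which is the kind of graduated performance analysis the reviewer credits the book with.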
Task scheduling receives particularly thorough treatment, especially for the case where different processors run at predictably different rates. I found the material clearly laid out, but baffling in one respect: heterogeneous processors typically arise in irregular communication networks such as the Internet, i.e., networks quite unlike the ones discussed in chapters 3-5.
I can give only a conditional recommendation of this book, depending on the reader's interests. Its discussion of topology-dependent communication and problem partitioning contains good material, the task-scheduling coverage is exceptional, and the problems in each chapter seem well designed to reinforce its content. The book's idiosyncratic choice of subjects limits the audience to whom it will be helpful, but if you're in the target audience for one of its strong points, you're likely to find it very rewarding.
-- wiredweird
Parallel Algorithms (Chapman and Hall/CRC Numerical Analysis and Scientific Computation Series)
Focusing on algorithms for distributed-memory parallel architectures, Parallel Algorithms presents a rigorous yet accessible treatment of theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and essential notions of scheduling. The book extracts fundamental ideas and algorithmic principles from the mass of parallel algorithm expertise and practical implementations developed over the last few decades.
In the first section of the text, the authors cover two classical theoretical models of parallel computation (PRAMs and sorting networks), describe network models for topology and performance, and define several classical communication primitives. The next part deals with parallel algorithms on ring and grid logical topologies as well as the issue of load balancing on heterogeneous computing platforms. The final section presents basic results and approaches for common scheduling problems that arise when developing parallel algorithms. It also discusses advanced scheduling topics, such as divisible load scheduling and steady-state scheduling.
With numerous examples and exercises in each chapter, this text encompasses both the theoretical foundations of parallel algorithms and practical parallel algorithm design.