

Communicating Multiprocessor-Tasks

Prof. Dr. G. Rünger
Fakultät für Informatik
Technische Universität Chemnitz

Overview

Figure: Example of a CM-task graph and a corresponding schedule

The scalability of parallel applications on large distributed-memory platforms is often limited by expensive global communication and synchronization operations. For many applications this communication overhead can be reduced by exploiting coarse-grained parallelism. Many approaches based on parallel tasks have been proposed to this end, but they usually restrict the interaction between parallel tasks to input-output relations.

The programming model of Communicating Multiprocessor-Tasks (CM-tasks) extends previous models based on parallel tasks by additionally supporting interactions between parallel tasks during their execution. This allows a more flexible structuring of parallel applications and enables the use of optimized communication patterns for data exchanges.

In particular, the CM-task programming model

  • supports the exploitation of task parallelism (between CM-tasks) and data parallelism (within CM-tasks) at the same time;
  • provides two possibilities for the interaction between the CM-tasks of a program: input-output relations (P-relations) and communication relations (C-relations), as illustrated by the sketch after this list;
  • allows a flexible mapping of the CM-tasks onto the execution resources of a parallel platform, leaving open whether independent CM-tasks are executed concurrently or one after another;
  • includes software support in the form of the CM-task compiler framework, which generates efficient implementations starting from a user-provided specification program.
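
The following minimal C+MPI sketch illustrates the effect of C-relations: two CM-tasks run concurrently on disjoint processor groups (task parallelism), each performs a data-parallel reduction within its own group, and the two tasks exchange intermediate results while both are still executing. This is hand-written illustration code, not output of the CM-task compiler; the pairing scheme and all names are assumptions made for this example.

    #include <mpi.h>
    #include <stdio.h>

    /* Two CM-tasks on disjoint process groups that exchange data while running
     * (a C-relation). Assumes an even number of processes; names are illustrative. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int task = (rank < size / 2) ? 0 : 1;   /* which CM-task this process belongs to */
        MPI_Comm group;                         /* processor group executing the CM-task */
        MPI_Comm_split(MPI_COMM_WORLD, task, rank, &group);

        int partner = (rank + size / 2) % size; /* counterpart process in the other CM-task */
        double local = rank + 1.0, sum, remote;

        for (int step = 0; step < 5; step++) {
            /* data parallelism within the CM-task: collective on the group only */
            MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, group);

            /* C-relation: exchange intermediate results with the other CM-task
             * while both tasks are still executing */
            MPI_Sendrecv(&sum, 1, MPI_DOUBLE, partner, 0,
                         &remote, 1, MPI_DOUBLE, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            local = 0.5 * (sum + remote) / (size / 2); /* continue with combined data */
        }

        if (rank == 0 || rank == size / 2)
            printf("task %d finished with value %f\n", task, local);

        MPI_Comm_free(&group);
        MPI_Finalize();
        return 0;
    }

A pure P-relation would instead let one CM-task finish completely and pass its results to a successor task via a data re-distribution between their processor groups.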


CM-task Compiler Framework

Figure: Overview of the CM-task Compiler Framework

The CM-task compiler framework supports the application developer in creating efficient implementations of CM-task programs. The framework requires the user to provide a platform-independent specification program, a machine description, and implementations of the parallel modules in the form of parallel functions, e.g. C+MPI functions. The core of the framework is the CM-task compiler, which translates the user-provided specification program and machine description into an executable coordination program in several transformation steps. Additionally, the framework includes runtime libraries that support the execution of the generated coordination program.
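
As an illustration, a user-provided parallel module could look like the following C+MPI function. The exact interface is declared in the specification program, so the signature shown here (function name, parameters, group communicator argument) is only an assumption made for this sketch.

    #include <mpi.h>

    /* Hypothetical parallel module implementing one CM-task.
     * The coordination program passes the communicator of the processor group
     * on which this CM-task is executed; all communication of the module is
     * restricted to this group. Names and parameters are illustrative. */
    void solve_block(double *x, int local_n, MPI_Comm group) {
        int grank, gsize;
        MPI_Comm_rank(group, &grank);
        MPI_Comm_size(group, &gsize);

        /* data-parallel computation on the locally owned block */
        double local_sum = 0.0;
        for (int i = 0; i < local_n; i++)
            local_sum += x[i];

        /* collective operation within the processor group of this CM-task */
        double global_sum;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, group);

        for (int i = 0; i < local_n; i++)
            x[i] /= global_sum;   /* e.g. normalize using the group-wide result */
    }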

The platform-independent specification program includes

  • the definition of the interfaces of the user-provided parallel functions;
  • cost estimates for the parallel functions depending on the number of executing processors;
  • a description of the structure of the application, i.e., the interactions between the CM-tasks used.

The machine description file defines
  • the number of processors of the parallel target platform;
  • the computational performance of the processors;
  • the communication performance of the interconnection network.

The coordination program produced includes (a conceptual sketch follows the list)
  • an underlying schedule that is adapted to a specific parallel platform;
  • management code for the creation of the processor groups and the execution of CM-tasks on these processor groups;
  • data re-distribution operations to guarantee a correct data flow between CM-tasks;
  • dynamic load balancing to adapt the sizes of the processor groups to the computational work performed (optional).
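
Conceptually, the generated coordination program carries out steps like the following. This is a simplified hand-written sketch of those responsibilities, not actual compiler output; all module names are placeholders, and the re-distribution is reduced to two broadcasts for brevity.

    #include <mpi.h>

    /* Trivial stand-ins for user-provided parallel modules (placeholders). */
    static void task_A(double *d, int n, MPI_Comm g) { for (int i = 0; i < n; i++) d[i] = 1.0; }
    static void task_B(double *d, int n, MPI_Comm g) { for (int i = 0; i < n; i++) d[i] = 2.0; }
    static void task_C(double *d, int n, MPI_Comm g) { MPI_Barrier(g); }

    /* Simplified sketch of coordination-program responsibilities: create the
     * processor groups prescribed by the schedule, run independent CM-tasks
     * concurrently on these groups, and re-distribute data before a following
     * CM-task that runs on all processes. Assumes at least two processes. */
    void coordinate(double *data, int n) {
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* group creation according to the (statically computed) schedule */
        int color = (rank < size / 2) ? 0 : 1;
        MPI_Comm group;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group);

        /* independent CM-tasks A and B execute concurrently on disjoint groups;
         * A produces the first half of the data, B the second half */
        if (color == 0)
            task_A(data, n / 2, group);
        else
            task_B(data + n / 2, n - n / 2, group);

        /* data re-distribution: make both partial results available on all
         * processes before the next CM-task starts (here simplified to two
         * broadcasts from the group leaders) */
        MPI_Bcast(data,         n / 2,     MPI_DOUBLE, 0,        MPI_COMM_WORLD);
        MPI_Bcast(data + n / 2, n - n / 2, MPI_DOUBLE, size / 2, MPI_COMM_WORLD);

        /* the following CM-task runs on all processes after the re-distribution */
        task_C(data, n, MPI_COMM_WORLD);

        MPI_Comm_free(&group);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        double data[8];
        coordinate(data, 8);
        MPI_Finalize();
        return 0;
    }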


Cooperation

This project is a collaboration between Chemnitz University of Technology and the University of Bayreuth.

Publications on CM-tasks

  • Dümmler, J.; Rauber, T.; Rünger, G.: Programming Support and Scheduling for Communicating Parallel Tasks. In: Journal of Parallel and Distributed Computing, Vol. 73, No. 2, pp. 220-234. Elsevier – ISSN 0743-7315, 2013. DOI: 10.1016/j.jpdc.2012.09.017
  • Dümmler, J.; Rauber, T.; Rünger, G.: Scheduling Support for Communicating Parallel Tasks. In: Rajopadhye, S.; Strout, M. M. (Eds.): Languages and Compilers for Parallel Computing: 24th International Workshop, LCPC 2011, Fort Collins, CO, USA, September 8-10, 2011. Revised Selected Papers (LNCS, Vol. 7146), pp. 252-267. Springer – ISBN 978-3-642-36035-0, 2013.
  • Dümmler, J.: Interaction patterns for concurrently executed parallel tasks. In: Handlovičová, A.; Minarechová, Z.; Ševčovič, D. (Eds.): ALGORITMY 2012, 19th Conference on Scientific Computing, pp. 261-271. Slovak University of Technology in Bratislava – ISBN 978-80-227-3742-5. Vysoké Tatry - Podbanské, Slovakia, 2012.
  • Dümmler, J.; Rauber, T.; Rünger, G.: Semi-dynamic Scheduling of Parallel Tasks for Heterogeneous Clusters. In: Proceedings of the 10th International Symposium on Parallel and Distributed Computing (ISPDC 2011), pp. 1-8. IEEE – ISBN 978-1-4577-1536-5. Cluj-Napoca, Romania, 2011. DOI: 10.1109/ISPDC.2011.11
  • Dümmler, J.; Rauber, T.; Rünger, G.: Component-Based Programming Techniques for Coarse-grained Parallelism. In: Proceedings of the High Performance Computing Symposium 2011 (HPC 2011) (Simulation Series Volume 43#2), pp. 4-11. Curran Associates, Inc. – ISBN 978-1-6178-2840-9, 2011.
  • Dümmler, J.; Rauber, T.; Rünger, G.: Mixed Programming Models using Parallel Tasks. In: Dongarra, J.; Hsu, C.-H.; Li, K.-C.; Yang, L. T.; Zima, H. (Eds.): Handbook of Research on Scalable Computing Technologies, pp. 246-275. Information Science Reference – ISBN 978-1-60566-661-7, 2009.
  • Dümmler, J.; Rauber, T.; Rünger, G.: A Transformation Framework for Communicating Multiprocessor-Tasks. In: Proc. of the 16th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2008), pp. 64-71. IEEE Computer Society – ISBN 978-0-7695-3089-5. Toulouse, France, 2008.
  • Dümmler, J.; Rauber, T.; Rünger, G.: Communicating Multiprocessor-Tasks. In: Languages and Compilers for Parallel Computing: 20th International Workshop, LCPC 2007, Urbana, IL, USA, October 11-13, 2007, Revised Selected Papers (LNCS, Vol. 5234), pp. 292-307. Springer – ISBN 978-3-540-85260-5, 2008.

Related Publications on Parallel Tasks

  • Dümmler, J.; Rünger, G.: Layer-Based Scheduling of Parallel Tasks for Heterogeneous Cluster Platforms. In: Kołodziej, J.; Martino, B. Di; Talia, D.; Xiong, K. (Eds.): Algorithms and Architectures for Parallel Processing: 13th International Conference, ICA3PP 2013, Vietri sul Mare, Italy, December 18-20 (LNCS, Vol. 8285), pp. 30-43. Springer – ISBN 978-3-319-03858-2, 2013. DOI: 10.1007/978-3-319-03859-9_3
  • Dümmler, J.; Rauber, T.; Rünger, G.: Scalable Computing with Parallel Tasks. In: Proc. of the 2nd IEEE Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS '09). ACM – ISBN 978-1-60558-714-1. Portland, Oregon, USA, 2009. DOI: 10.1145/1646468.1646477
  • Dümmler, J.; Rauber, T.; Rünger, G.: Mapping Algorithms for Multiprocessor Tasks on Multi-Core Clusters. In: Proc. of the 37th International Conference on Parallel Processing (ICPP 2008), pp. 141-148. IEEE Computer Society – ISBN 978-0-7695-3374-2. Portland, Oregon, USA, 2008. DOI: 10.1109/ICPP.2008.42