Department of Computing
M.Eng. Fourth Year Course
CO405H (Autumn 2016)
Dataflow Computing in Space
Oskar Mencer (email@example.com)
Georgi Gaydadjiev (firstname.lastname@example.org)
Dataflow computing in space at work
Various OpenSPL-based designs can be found in the AppGallery.
The aim of this course is to introduce you to Dataflow Computing in Space.
After completing this course you will understand:
- the fundamental methods and techniques
- the multiple dimensions of computing
- the main differences from temporal computing;
will know how to address design implications:
- at the application and algorithmic levels
- by solving system-level bottlenecks
- by customizing data choreography
- by using arithmetic-level optimizations;
and will become proficient in:
- splitting applications into controlflow and dataflow parts
- implementing and debugging high-performance parallel algorithms in space
- reasoning about the spatial complexity of algorithms, arithmetic and data moves
- trading off computational area against bandwidth to improve performance
- achieving maximal performance on a specific spatial computer implementation.
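The controlflow/dataflow split mentioned above can be previewed with a minimal sketch in plain Python (an illustrative simulation only, not OpenSPL or course code): the regular arithmetic is expressed as a fixed feed-forward pipeline through which data streams, while the host retains any irregular control logic.

```python
def moving_average_dataflow(stream):
    """Simulate a spatial 3-tap moving-average kernel: a static pipeline
    through which every input element flows. There is no program counter;
    conceptually, data moves past fixed arithmetic units."""
    out = []
    prev1 = prev2 = 0
    for x in stream:             # one tick per input element
        out.append((prev2 + prev1 + x) / 3.0)
        prev2, prev1 = prev1, x  # shift register: data moves, not instructions
    return out

print(moving_average_dataflow([3, 6, 9, 12]))  # [1.0, 3.0, 6.0, 9.0]
```

On a real spatial computer each addition and the divide would occupy dedicated area and all taps would update in the same clock cycle; the Python loop merely emulates that tick-by-tick behaviour.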
Learning Outcomes - Knowledge and Understanding
To know the specific material covered by the Syllabus, including the ability to do the following:
- To explain the multidimensional trade-offs used in building spatial computers
- To describe the conceptual properties of controlflow and dataflow
- To explain how dataflow computing in space impacts energy efficiency
- To explain the analytical models behind performance, computational density and power consumption estimation of spatial computing systems
- To describe the impact of arithmetic transformations in space
- To explain how data orchestration determines performance and energy efficiency of spatial computers
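As a small illustration of the kind of analytical performance model listed above (the min-of-bounds formula is a standard back-of-the-envelope model; the function name and the numbers below are illustrative, not taken from the course notes): a streaming spatial kernel is limited by whichever of arithmetic parallelism and memory bandwidth saturates first.

```python
def streaming_throughput(bytes_per_value, pipes, clock_hz, mem_bw):
    """Estimate results/second for a streaming spatial kernel as the
    minimum of the compute bound and the memory-bandwidth bound."""
    compute_bound = pipes * clock_hz      # one result per pipe per cycle
    bandwidth_bound = mem_bw / bytes_per_value
    return min(compute_bound, bandwidth_bound)

# e.g. 4 parallel pipes at 200 MHz streaming 8-byte values against
# 38.4 GB/s of DRAM bandwidth: the design is compute bound.
print(streaming_throughput(8, 4, 200e6, 38.4e9))  # 800000000.0
```

Replicating pipes raises the compute bound until the bandwidth bound takes over (here, beyond 24 pipes), at which point extra arithmetic area is wasted.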
Learning Outcomes - Intellectual Skills
- To reason about the spatial complexity of algorithms, of arithmetic operations and of data movements
- To judge different techniques for performance, computational density and power consumption improvements
- To identify the application specific limitations in terms of spatial computing
- To judge different options for implementing large numeric kernels in space
Learning Outcomes - Practical Skills
- To optimise the use of the available arithmetic area and data bandwidth
- To apply the knowledge and the experience gained in creating maximum performance spatial systems
- To make informed decisions about spatial system specification, design and debug
Learning Outcomes - Transferable Skills
- To relate common algorithmic properties to spatial computer implementation techniques
- To apply the insights gained to improving conventional computer systems architectures
Description and Assessment
The course covers the whole range of issues related to programming and optimization of Spatial Computing Systems. The written examination (open book) in week 11 contributes 85% to the overall mark, while the individual project contributes 15%. Unassessed practice tests, for formative assessment only, are available here (TBD).
Prerequisites
Java, Computer Architecture I, Logic Design, Algorithms and Data Structures
Syllabus
- Dataflow computing in space;
- Programming models, execution models, and memory models;
- High-performance and scientific programs;
- System architecture and networking;
- Spatial arithmetic and number representation, spatial data choreography, communication, locality management, custom encoding, globally optimal scheduling and custom patterns;
- Performance estimation, debugging in space, minimum frequency computing, energy efficient computing, total cost of ownership, and spatial computers benchmarking.
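Number representation, one of the syllabus topics above, can be previewed with a short sketch (plain Python, illustrative only): committing to a narrower fixed-point fraction width trades accuracy for arithmetic area on a spatial computer.

```python
def quantize_fixed_point(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits,
    i.e. the precision a spatial design would commit to in hardware."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

# Fewer fractional bits mean smaller adders/multipliers but larger error.
pi = 3.14159265
for bits in (4, 8, 16):
    print(bits, quantize_fixed_point(pi, bits))
```

In a spatial design the chosen width fixes the size of every operator in the pipeline, so such precision analysis is done per variable rather than once per program.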
Recommended Reading
- Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities, Gene M. Amdahl (IBM, Sunnyvale, California), AFIPS Spring Joint Computer Conference, 1967.
- Some Computer Organizations and Their Effectiveness, Michael J. Flynn, IEEE Trans. Computers, C-21 (9): 948-960, Sept 1972.
- Computer Architecture: Pipelined And Parallel Processor Design, (Chapters 1-7) Michael J Flynn, May 1995.
- iWarp: An integrated solution to high-speed parallel computing, S. Borkar, R. Cohn, G. Cox, S. Gleason, T. Gross, H. T. Kung, M. Lam, B. Moore, C. Peterson, J. Pieper, L. Rankin, P. S. Tseng, J. Sutton, J. Urbanski, and J. Webb. In Proceedings of IEEE/ACM SC '88, pages 330-339, Orlando, Florida, November 1988.
- Decoupled Access/Execute Computer Architectures, J. E. Smith, ACM Transactions on Computer Systems, Volume 2, Issue 4, pp. 289-308, November 1984.
- OpenSPL Specification, v1.0, http://www.openspl.org
- Sparse Coefficient polynomial approximations for hardware implementations, N. Brisebarre, J. M. Muller and A. Tisserand, In Proc. of 38th Asilomar Conference on Signals, Systems and Computers, pp. 532-535, California, USA, 2004.
- Moving from Petaflops to Petadata, Communications of the ACM, Vol. 56 No. 5, May 2013.
- Finding the Right Level of Abstraction for Minimizing Operational Expenditure, Workshop on High Performance Computational Finance at SC11, November 2011.
- Rapid Computation of Value and Risk for Derivatives Portfolios, Concurrency and Computation: Practice and Experience, Special Issue Paper, July 2011.
- Beyond Traditional Microprocessors for Geoscience High-Performance Computing Applications, IEEE Micro, vol. 31, no. 2, March/April 2011.
Lecture Schedule / Notes
Web based Design Environment (WebIDE)
You can find the WebIDE here. Please note that you can use the departmental computers for simulation. Installation on your own notebook will not work due to the licensing restrictions of the various tools involved.
Other useful information
Self-assessment using unassessed exercises (with model answers).
There are no office hours. Please send us an e-mail in case you have a question.
Past exam papers can be found here for 2014 and 2015.
Page maintained by: Georgi Gaydadjiev
Last updated: 5 October 2016