COMP35112 Chip Multiprocessors syllabus 2019-2020
Due to technological limitations, it is proving increasingly difficult to maintain a continual increase in the performance of individual processors. Therefore, the current trend is to integrate multiple processors onto a single chip and exploit the resulting parallel resources to achieve higher computing power. However, this may require significantly different approaches to both hardware and software, particularly for general-purpose applications. This course will explore these issues in detail.
Trends in technology, limitations and consequences. The move to multi-core.
Parallelism in programs: ILP, thread-level parallelism and data parallelism.
SIMD, MIMD, Shared Memory, Distributed Memory, strengths and weaknesses.
Multithreaded programming, data-parallel programming, explicit vs. implicit parallelism, automatic parallelisation. The case for shared memory. When to share?
Shared Memory Multiprocessors
Basic structures and the cache coherence problem. The MESI protocol and its limitations. Directory-based coherence.
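The MESI states for a single cache line can be viewed as a small state machine. The following is an illustrative simplification in Java (the course's laboratory language), not the full protocol: it tracks only the state of one line, ignoring the data itself and the bus transactions that move it, and the class and method names are invented for this sketch.

```java
// Minimal sketch of MESI state transitions for one cache line.
enum MesiState { MODIFIED, EXCLUSIVE, SHARED, INVALID }

class CacheLine {
    MesiState state = MesiState.INVALID;

    // Local processor reads the line.
    void localRead(boolean otherCachesHoldCopy) {
        if (state == MesiState.INVALID) {
            state = otherCachesHoldCopy ? MesiState.SHARED : MesiState.EXCLUSIVE;
        }
        // MODIFIED, EXCLUSIVE, SHARED: the read hits, state unchanged.
    }

    // Local processor writes the line: must gain exclusive ownership.
    void localWrite() {
        state = MesiState.MODIFIED; // from SHARED/INVALID, other copies are first invalidated
    }

    // Another cache reads the same line (snooped bus read).
    void remoteRead() {
        if (state == MesiState.MODIFIED || state == MesiState.EXCLUSIVE) {
            state = MesiState.SHARED; // from MODIFIED, the dirty data is also written back
        }
    }

    // Another cache writes the same line (snooped invalidation).
    void remoteWrite() {
        state = MesiState.INVALID;
    }
}
```

For example, a read miss with no other sharers takes the line to EXCLUSIVE, a subsequent local write to MODIFIED, and a snooped remote read back to SHARED.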
Programming with Locks and Barriers
The need for synchronisation. Problems with explicit synchronisation.
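As an illustration of why synchronisation is needed, the following Java sketch uses a ReentrantLock to keep a shared counter correct under concurrent increments, and a CyclicBarrier to make every thread wait until all have finished. The class and method names are invented for this example.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.locks.ReentrantLock;

// A lock protects a shared counter, and a barrier holds every thread
// until all increment phases have completed.
class LockBarrierDemo {
    static int counter;
    static final ReentrantLock lock = new ReentrantLock();

    static int run(int nThreads, int incrementsEach) {
        counter = 0;
        CyclicBarrier barrier = new CyclicBarrier(nThreads);
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < incrementsEach; j++) {
                    lock.lock();               // without the lock, increments can be lost
                    try { counter++; }
                    finally { lock.unlock(); }
                }
                try {
                    barrier.await();           // wait here until every thread has finished
                } catch (InterruptedException | BrokenBarrierException e) {
                    throw new RuntimeException(e);
                }
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter;
    }
}
```

With the lock, run(4, 1000) always yields 4000; removing it makes the result nondeterministic, which is exactly the lost-update problem explicit synchronisation addresses.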
Other Parallel Programming Approaches
MPI and OpenMP
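OpenMP and MPI target C, C++ and Fortran; as a rough Java analogue of an OpenMP parallel-for loop with a sum reduction, a parallel stream lets the runtime partition an index range across worker threads. The class name here is invented for illustration.

```java
import java.util.stream.IntStream;

// Rough Java analogue of an OpenMP "parallel for" with a reduction:
// the runtime splits the index range across a pool of worker threads.
class ParallelSum {
    static long sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n)
                        .parallel()          // cf. #pragma omp parallel for reduction(+:sum)
                        .mapToLong(i -> (long) i * i)
                        .sum();
    }
}
```

For example, sumOfSquares(10) evaluates to 385 regardless of how the range is partitioned, because the reduction is associative.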
Speculation and Transactional Memory
The easy route to automatic parallelisation? Principles. Hardware and software approaches.
Memory system design. Memory consistency.
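A small Java sketch of why memory consistency matters: without the volatile qualifier on the flag below, the Java memory model would permit the reader thread to observe ready == true while still seeing data == 0. The names are invented for this illustration.

```java
// Safe publication via a volatile flag: the write to "data" happens-before
// the write to "ready", which happens-before the reader seeing ready == true.
class Publication {
    static int data = 0;
    static volatile boolean ready = false;   // volatile creates the happens-before edge
    static final int[] observed = new int[1];

    static int publishAndRead() {
        Thread writer = new Thread(() -> { data = 42; ready = true; });
        Thread reader = new Thread(() -> {
            while (!ready) Thread.onSpinWait();  // spin until the publication is visible
            observed[0] = data;                  // guaranteed to see 42
        });
        writer.start();
        reader.start();
        try {
            writer.join();
            reader.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return observed[0];
    }
}
```

Dropping volatile would leave the two plain writes free to be reordered or to propagate to the reader out of order, which is the kind of behaviour a memory consistency model must pin down.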
Other Architectures and Programming Approaches
Data Driven Parallelism
Dataflow principles and Functional Programming
Written feedback is provided on reports for the laboratory exercises. Students who attempt previous exam questions can get feedback on their answers.
- Lectures (24 hours)
- Analytical skills
- Problem solving
- Written communication
On successful completion of this unit, a student will be able to:
- describe the main issues, along with key proposed solutions, in the design and construction of chip multi-processor hardware (e.g. multi-core CPUs and GPUs) and related programming languages.
- evaluate a number of specific examples of extensions to programming languages supporting the writing of correct and efficient code on shared memory multi-processors.
- compare a number of extensions to hardware structures supporting, for example, memory coherence, synchronisation, speculation and transactional memory, to improve the performance of code executing on chip multiprocessors.
- compare a number of extensions to programming languages and supporting data structures, including so-called "lock-free" data structures, to improve the performance of code on chip multiprocessors.
- evaluate the effectiveness of concurrency support in Java, including the use of threads, locks and barriers, in the context of experience gained with three simple parallel programming exercises.
COMP35112 does not have a specified reading list.
Course unit materials
Links to course unit teaching materials can be found on the School of Computer Science website for current students.