
This is an archived syllabus from 2019-2020


COMP35112 Chip Multiprocessors

Level 3
Credits: 10
Enrolled students: 39

Course leader: Dave Lester



Requisites

  • Pre-Requisite (Compulsory): COMP25212

Additional requirements

  • Students who are not from the School of Computer Science must have permission from both Computer Science and their home School to enrol.

Assessment methods

  • 70% Written exam
  • 30% Coursework

Timetable

Semester | Event | Location | Day | Time | Group
Sem 2 | Lecture | Roscoe 1.008 | Mon | 11:00 - 12:00 | -
Sem 2, weeks 19-21, 23-27, 31-32 | Lecture | Zochonis TH D | Fri | 13:00 - 14:00 | -
Sem 2, weeks 22-23, 25-27, 31 | Lab | Toot 1 | Mon | 15:00 - 16:00 | -

Themes to which this unit belongs

  • Computer Architecture

Overview

Due to technological limitations, it is proving increasingly difficult to maintain a continual increase in the performance of individual processors. Therefore, the current trend is to integrate multiple processors onto a single chip and exploit the resulting parallel resources to achieve higher computing power. However, this may require significantly different approaches to both hardware and software, particularly for general-purpose applications. This course will explore these issues in detail.

Syllabus

Introduction

Trends in technology, limitations and consequences. The move to multi-core. Parallelism in programs: instruction-level parallelism (ILP), thread-level parallelism, data parallelism.

Parallel Architectures

SIMD, MIMD, Shared Memory, Distributed Memory, strengths and weaknesses.

Parallel Programming

Multithreaded programming, data-parallel programming, explicit vs. implicit parallelism, automatic parallelisation. The case for shared memory. When to share?
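
A minimal Java sketch of explicit multithreaded programming (Java being the language used for this unit's exercises); the array size and contents are illustrative:

    // Two threads each sum half of an array into their own slot.
    public class ParallelSum {
        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[1_000_000];
            java.util.Arrays.fill(data, 1);

            // Note: the two slots likely share a cache line, a source of false sharing.
            long[] partial = new long[2];
            Thread lo = new Thread(() -> {
                for (int i = 0; i < data.length / 2; i++) partial[0] += data[i];
            });
            Thread hi = new Thread(() -> {
                for (int i = data.length / 2; i < data.length; i++) partial[1] += data[i];
            });

            lo.start(); hi.start();
            lo.join();  hi.join();   // join() makes each partial sum visible here

            System.out.println(partial[0] + partial[1]); // prints 1000000
        }
    }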

Shared Memory Multiprocessors

Basic structures, the cache coherence problem. The MESI protocol and its limitations. Directory-based coherence.
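
The per-line state machine at the heart of MESI can be sketched as follows; this simplified Java model tracks only the state transitions, omitting the bus transactions, write-backs and data movement a real protocol performs:

    // Simplified MESI state machine for a single cache line.
    enum State { MODIFIED, EXCLUSIVE, SHARED, INVALID }

    class CacheLine {
        State state = State.INVALID;

        // Local read; 'othersHaveCopy' would come from snooping the bus.
        void onLocalRead(boolean othersHaveCopy) {
            if (state == State.INVALID)
                state = othersHaveCopy ? State.SHARED : State.EXCLUSIVE;
            // M, E, S: read hit, no state change.
        }

        // Local write: the line must be held exclusively before modification.
        void onLocalWrite() {
            state = State.MODIFIED; // from S or I this also invalidates other copies
        }

        // Another core's read observed on the bus.
        void onRemoteRead() {
            if (state == State.MODIFIED || state == State.EXCLUSIVE)
                state = State.SHARED; // from M, the dirty data is also written back
        }

        // Another core's write (read-for-ownership) observed on the bus.
        void onRemoteWrite() {
            state = State.INVALID;
        }
    }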

Programming with Locks and Barriers

The need for synchronisation. Problems with explicit synchronisation.
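
A short Java sketch of both primitives: a lock makes a shared counter update atomic, and a barrier ensures every thread finishes phase 1 before any starts phase 2 (thread and iteration counts are arbitrary):

    import java.util.concurrent.CyclicBarrier;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockBarrierDemo {
        static final ReentrantLock lock = new ReentrantLock();
        static long counter = 0;

        public static void main(String[] args) {
            final int nThreads = 4;
            // The barrier action runs once, after all threads have arrived.
            CyclicBarrier barrier = new CyclicBarrier(nThreads,
                    () -> System.out.println(counter)); // prints 400000

            for (int t = 0; t < nThreads; t++) {
                new Thread(() -> {
                    // Phase 1: lost updates without the lock, correct with it.
                    for (int i = 0; i < 100_000; i++) {
                        lock.lock();
                        try { counter++; } finally { lock.unlock(); }
                    }
                    try { barrier.await(); } catch (Exception e) { return; }
                    // Phase 2: all threads now see the complete phase-1 result.
                }).start();
            }
        }
    }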

Other Parallel Programming Approaches

Message passing with MPI and directive-based shared-memory programming with OpenMP.
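
Both are C/C++/Fortran interfaces rather than Java ones; as a rough Java analogue of an OpenMP parallel-for reduction, a parallel stream splits an index range across a pool of worker threads:

    import java.util.stream.IntStream;

    public class ReductionDemo {
        public static void main(String[] args) {
            long sum = IntStream.rangeClosed(1, 1_000_000)
                                .parallel()      // ~ #pragma omp parallel for reduction(+:sum)
                                .asLongStream()
                                .sum();
            System.out.println(sum);             // 500000500000
        }
    }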

Speculation

The easy route to automatic parallelisation?

Transactional Memory

Principles. Hardware and software approaches.
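
Java has no standard transactional memory, but the optimistic read-compute-validate-retry loop that transactional memory generalises (from a single word to whole read and write sets) can be sketched with a compare-and-swap:

    import java.util.concurrent.atomic.AtomicLong;

    public class OptimisticUpdate {
        static final AtomicLong balance = new AtomicLong(100);

        static void deposit(long amount) {
            while (true) {
                long seen = balance.get();              // "transaction" reads
                long want = seen + amount;              // tentative computation
                if (balance.compareAndSet(seen, want))  // commit iff nothing changed
                    return;
                // Otherwise a conflicting update won: abort and retry.
            }
        }

        public static void main(String[] args) {
            deposit(42);
            System.out.println(balance.get()); // prints 142
        }
    }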

Memory Issues

Memory system design. Memory consistency models.
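
A standard Java illustration of why consistency matters: without the volatile qualifier below, the Java memory model would allow the reader to see the flag set while the data write is still invisible, or to spin forever on a stale flag:

    public class Visibility {
        static int data = 0;
        static volatile boolean ready = false; // volatile: release/acquire semantics

        static void writer() {
            data = 42;     // ordered before the volatile write below
            ready = true;  // release
        }

        static void reader() {
            while (!ready) { }  // acquire: spin until the flag is seen
            assert data == 42;  // guaranteed only because 'ready' is volatile
        }

        public static void main(String[] args) {
            new Thread(Visibility::writer).start();
            new Thread(Visibility::reader).start();
        }
    }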

Other Architectures and Programming Approaches

GPGPUs and CUDA.

Data Driven Parallelism

Dataflow principles and functional programming.
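
In the dataflow model an operation fires as soon as its operands arrive, rather than at a program-ordered point. Java's CompletableFuture gives a small-scale flavour of this dependency-driven execution (the values are illustrative):

    import java.util.concurrent.CompletableFuture;

    public class DataflowDemo {
        public static void main(String[] args) {
            CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 6);
            CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 7);

            // This node depends on a and b; it fires when both inputs are available.
            CompletableFuture<Integer> product = a.thenCombine(b, (x, y) -> x * y);

            System.out.println(product.join()); // prints 42
        }
    }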

Feedback methods

Written feedback is provided on reports for the laboratory exercises. Students who attempt previous exam questions can receive feedback on their answers.

Study hours

  • Lectures (24 hours)

Employability skills

  • Analytical skills
  • Problem solving
  • Written communication
  • Other

Learning outcomes

On successful completion of this unit, a student will be able to:

  • describe the main issues, along with key proposed solutions, in the design and construction of chip multi-processor hardware (e.g. multi-core CPUs and GPUs) and related programming languages.
  • evaluate a number of specific examples of extensions to programming languages supporting the writing of correct and efficient code on shared memory multi-processors.
  • compare a number of extensions to hardware structures supporting, for example, memory coherence, synchronization, speculation and transactional memory, to improve the performance of code executing on chip multiprocessors.
  • compare a number of extensions to programming languages and supporting data structures, including so-called "lock-free" data structures, to improve the performance of code on chip multiprocessors (a minimal sketch of one such structure follows this list).
  • evaluate the effectiveness of concurrency support in Java, including the use of threads, locks and barriers, in the context of experience gained with three simple parallel programming exercises.
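
As a minimal sketch of a lock-free data structure of the kind mentioned above, a Treiber stack retries a compare-and-swap instead of taking a lock (Java's garbage collector sidesteps the ABA problem this raises in other languages):

    import java.util.concurrent.atomic.AtomicReference;

    public class LockFreeStack<T> {
        private static class Node<T> {
            final T value;
            final Node<T> next;
            Node(T value, Node<T> next) { this.value = value; this.next = next; }
        }

        private final AtomicReference<Node<T>> top = new AtomicReference<>();

        public void push(T value) {
            Node<T> oldTop, newTop;
            do {
                oldTop = top.get();
                newTop = new Node<>(value, oldTop);
            } while (!top.compareAndSet(oldTop, newTop)); // retry if another thread won
        }

        public T pop() {
            Node<T> oldTop;
            do {
                oldTop = top.get();
                if (oldTop == null) return null;          // empty stack
            } while (!top.compareAndSet(oldTop, oldTop.next));
            return oldTop.value;
        }
    }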

Reading list

No reading list found for COMP35112.

Additional notes

Course unit materials

Links to course unit teaching materials can be found on the School of Computer Science website for current students.