
COMP20411: Subsymbolic Processing and Neural Networks (2008-2009)

This is an archived syllabus from 2008-2009

Level: 2
Credit rating: 10
Pre-requisites: COMP10020 or equivalent, e.g. (MATH10662 and MATH10672) or (MATH10111 and MATH10131 and MATH10212)
Co-requisites: No Co-requisites
Duration: 11 weeks in first semester
Lectures: 22 in total, 2 per week
Labs: 10 hours in total, five 2-hour sessions, partly credited to COMP20910/COMP20920
Lecturer: Jonathan Shapiro

Timetable
Weeks                Type      Room    Day  Time           Group
Sem 1, w1-5,7-12     Lecture   1.4     Fri  15:00 - 16:00  -
Sem 1, w1-5,7-12     Lecture   1.4     Thu  16:00 - 17:00  -
Sem 1, w2,4,7,9,11   Examples  LF15    Thu  10:00 - 11:00  F
Sem 1, w2,4,7,9,11   Examples  LF15    Thu  11:00 - 12:00  G
Sem 1, w3,5,8,10,12  Lab       Toot 1  Mon  14:00 - 16:00  F
Sem 1, w3,5,8,10,12  Lab       Toot 0  Tue  15:00 - 17:00  G
Assessment Breakdown
Exam: 80%
Coursework: 0%
Lab: 20%
Degrees for which this unit is optional
  • Artificial Intelligence BSc (Hons)


Aims

To introduce methods for extracting rules or learning from data, and provide the necessary mathematical background to enable the student to understand how the methods work and how to get the best performance from them. The course also includes heuristic problem-solving methods such as genetic algorithms. This course is pitched towards any student with a mathematical or scientific background who is interested in adaptive techniques for learning from data.

Learning Outcomes

Upon completion of the course, the student should:

1. Evaluate whether a learning system is appropriate for a particular problem. (B)
2. Understand how to use data for learning, model selection, and testing. (B,D)
3. Understand generally the relationship between model complexity and model performance, and be able to use this to design a strategy to improve an existing system. (B,D)
4. Understand the advantages and disadvantages of the learning systems studied in the course, and decide which is appropriate for a particular application. (B)
5. Be able to apply neural networks to simple data sets and evaluate their performance. (A)
6. Build a naive Bayes classifier, and interpret the results as probabilities. (A,B,D)
7. Devise greedy algorithms and genetic algorithms for solving optimization problems. (A)

Assessment of Learning outcomes

By examination: learning outcomes 1, 2, 3, 4, 5, 7.
By laboratory: learning outcomes 2, 3, 5, 6, 7.

Contribution to Programme Learning Outcomes

A1, A2, A5, B1, B3, D6 (especially probability and statistics)



Syllabus

Introduction

Overview of approach: symbolic versus subsymbolic methods; hand-built knowledge versus model extraction from data.

Introduction to learning theory and model evaluation [2]

The need to validate models learned from data. Techniques of performance estimation and validation. Generalization.
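The validation idea above can be sketched in a few lines of Python. This is an illustrative example, not course material: the "model" is a trivial majority-class classifier, and `holdout_accuracy` is a name invented here. The point is that performance must be estimated on data the model was not fitted to.

```python
import random

def holdout_accuracy(data, train_frac=0.7, seed=0):
    """Split labelled data, fit a trivial majority-class model on the
    training part, and report accuracy on the held-out part.
    data: list of (features, label) pairs."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, test = shuffled[:cut], shuffled[cut:]
    # "Fit": predict whichever label is most common in the training set.
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    correct = sum(1 for _, y in test if y == majority)
    return correct / len(test)

# Toy data: 70 examples of class 1, 30 of class 0.
data = [(x, 1) for x in range(70)] + [(x, 0) for x in range(30)]
acc = holdout_accuracy(data)
```

Accuracy measured on the training set itself would overstate how well the model generalizes; the held-out estimate is the one to report.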

Neural Networks [7]

Perceptrons - learning linear discriminants, limitations of perceptrons. Multilayer perceptrons - learning, example applications.
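As a minimal sketch of the perceptron learning rule on a linearly separable problem (logical AND; the data set and parameters are illustrative choices, not from the course notes): the weights are nudged toward each misclassified example, and convergence is guaranteed when the classes are linearly separable.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, target) with targets in {0, 1}.
    Returns (weights, bias) of a learned linear discriminant."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Threshold activation: fire if w.x + b > 0.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y
            # Perceptron rule: only misclassified points move the weights.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the perceptron can learn it;
# XOR, by contrast, is the classic example of its limitations.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

A multilayer perceptron overcomes the linear-separability limitation by composing several such units with differentiable activations trained by backpropagation.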

Bayesian decisions and classification [6]

Basic probability theory and probabilistic modelling. Decisions and risk. Bayesian classification. The naive Bayes classifier.
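A hedged sketch of the naive Bayes classifier with Laplace (add-one) smoothing follows; the toy weather data is invented for illustration. Features are assumed conditionally independent given the class, so the posterior is proportional to the prior times the product of per-feature likelihoods.

```python
from collections import Counter, defaultdict

def fit_naive_bayes(samples):
    """samples: list of (feature_tuple, label). Returns count tables."""
    class_counts = Counter(label for _, label in samples)
    # feat_counts[(i, v, label)] = how often feature i took value v
    # among examples of class `label`.
    feat_counts = Counter()
    feat_values = defaultdict(set)
    for x, label in samples:
        for i, v in enumerate(x):
            feat_counts[(i, v, label)] += 1
            feat_values[i].add(v)
    return class_counts, feat_counts, feat_values, len(samples)

def posterior(model, x):
    """Return {label: P(label | x)} with Laplace smoothing."""
    class_counts, feat_counts, feat_values, n = model
    scores = {}
    for label, c in class_counts.items():
        p = c / n  # class prior
        for i, v in enumerate(x):
            num = feat_counts[(i, v, label)] + 1        # add-one smoothing
            den = c + len(feat_values[i])
            p *= num / den                              # feature likelihood
        scores[label] = p
    z = sum(scores.values())  # normalize so the posteriors sum to 1
    return {label: p / z for label, p in scores.items()}

weather = [(("sunny", "hot"), "stay-in"), (("sunny", "mild"), "go-out"),
           (("rainy", "mild"), "stay-in"), (("sunny", "cool"), "go-out"),
           (("rainy", "cool"), "stay-in"), (("sunny", "mild"), "go-out")]
model = fit_naive_bayes(weather)
probs = posterior(model, ("sunny", "mild"))
```

Because the scores are normalized, the outputs can be read directly as probabilities, which is the interpretability property highlighted in learning outcome 6.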

Non-symbolic search techniques [6]

Greedy algorithms, hillclimbing, genetic algorithms. Application of genetic algorithms to reinforcement learning.
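A minimal genetic algorithm can be sketched on the OneMax problem (maximize the number of 1-bits in a string); the problem and all parameter values here are illustrative choices, not taken from the course. It shows the three standard operators: selection, crossover, and mutation.

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60,
                      mutation_rate=0.05, seed=1):
    """Evolve bit-strings toward all-ones; return the fittest survivor."""
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)  # OneMax: count the 1s
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random parents.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation preserves diversity in the population.
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
```

A hill-climber would instead flip one bit at a time and keep the change only if fitness improves; on OneMax both succeed, but the population-based search is less prone to getting stuck on problems with local optima.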