COMP34812 Natural Language Understanding syllabus 2021-2022
Level 3
Credits: 10
Enrolled students: 98
Course leader: Riza Batista-Navarro
Requisites
- Pre-Requisite (Compulsory): COMP34711
Assessment methods
- 50% Written exam
- 50% Practical skills assessment
Semester | Event | Location | Day | Time | Group |
---|---|---|---|---|---|
Sem 2 w20-27,31-34 | Asynchronous session | Williamson G.47 | Thu | 09:00 - 10:00 | - |
Sem 2 w20-27,31,33-34 | Lecture | IT407 | Mon | 09:00 - 10:00 | - |
Sem 2 w23,25,27,32,34 | Workshop | IT407 | Fri | 10:00 - 11:00 | - |
Overview
Drawing from concepts covered in the prerequisite COMP34711: Natural Language Processing unit, this unit will enable students to look more deeply into how machines analyse and recognise meaning expressed in natural language. In this unit, students will gain hands-on experience in investigating solutions to a number of natural language understanding tasks. This will provide students with the know-how required to develop technologies for real-world applications enabling communication between humans and machines, which have become increasingly ubiquitous and indispensable.
Aims
The unit aims to:
- introduce students to the concepts and computational methods that enable machines to understand and interpret natural language
- explain the various tasks that underpin natural language understanding, and provide an overview of the state-of-the-art solutions to these tasks as well as their real-world applications
Syllabus
- Introduction to NLU; Task formulations and applications
- Meaning representations: symbolic parsing and logical representations of sentences
- Vector-based representations (contextualised embeddings)
- Neural networks and neural language models
- Evaluation of models
- Sequence classification and textual entailment (and applications)
- Sequence labelling (and applications)
- Machine reading comprehension (and applications)
- Sequence-to-sequence translation (and applications)
- Limits and weaknesses of state-of-the-art approaches to NLU
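One topic above, evaluation of models, can be previewed with a short illustration. The sketch below (not course material, just a minimal stdlib-only example) computes span-level precision, recall and F1, the standard metrics for tasks such as named entity recognition:

```python
# Illustrative sketch: span-level precision, recall and F1 for an
# extraction-style NLU task. Spans are (start, end, label) tuples.

def prf1(gold, predicted):
    """Compute precision, recall and F1 over two collections of items."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # true positives: exact span-and-label matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# A system that recovers 2 of 3 gold entity spans plus 1 spurious span:
gold_spans = {(0, 2, "PER"), (5, 7, "LOC"), (9, 10, "ORG")}
pred_spans = {(0, 2, "PER"), (5, 7, "LOC"), (11, 12, "ORG")}
p, r, f = prf1(gold_spans, pred_spans)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.67 R=0.67 F1=0.67
```

Because both false positives and false negatives occur here, F1 gives a more balanced picture than accuracy would.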
Teaching methods
Asynchronous lectures (weekly)
Synchronous workshops (weekly)
Labs (fortnightly)
Feedback methods
Discussions and live coding during workshops (weekly)
Labs to support coursework (fortnightly)
Cohort-level feedback on the exam
Study hours
- Lectures (20 hours)
- Practical classes & workshops (10 hours)
Employability skills
- Analytical skills
- Problem solving
- Written communication
Learning outcomes
On successful completion of this unit, a student will be able to:
- discuss the formulation of different natural language understanding tasks as sequence processing tasks, e.g., sequence classification, sequence-to-sequence translation and sequence labelling
- differentiate between different types of parsing algorithms and apply them to natural language data to produce meaning representations
- compare different approaches to tasks such as named entity recognition and sentiment analysis
- relate natural language understanding tasks to applications such as question answering and conversational agents, among others
- develop a solution to a natural language understanding task with application to a real-world problem
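Several of the outcomes above concern casting NLU tasks as sequence labelling. As a hedged illustration only (the unit's actual methods are neural; this sketch shows just the task formulation), named entity recognition can be framed as per-token BIO tagging:

```python
# Illustrative formulation sketch: converting entity spans over tokens into
# BIO tags, the standard target representation for sequence labelling.

def spans_to_bio(tokens, spans):
    """spans: list of (start, end_exclusive, label) over token indices."""
    tags = ["O"] * len(tokens)  # default: outside any entity
    for start, end, label in spans:
        tags[start] = f"B-{label}"              # beginning of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"              # inside the entity
    return tags

tokens = ["Ada", "Lovelace", "lived", "in", "London"]
spans = [(0, 2, "PER"), (4, 5, "LOC")]
tags = spans_to_bio(tokens, spans)
print(" ".join(f"{t}/{g}" for t, g in zip(tokens, tags)))
# Ada/B-PER Lovelace/I-PER lived/O in/O London/B-LOC
```

A sequence labeller is then trained to predict one such tag per token, from which entity spans can be read back off deterministically.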
Reading list
Title | Author | ISBN | Publisher | Year |
---|---|---|---|---|
A Survey of the Usages of Deep Learning for Natural Language Processing | Daniel W. Otter, Julian R. Medina and Jugal K. Kalita | | | |
An Introduction to Deep Learning in Natural Language Processing: Models, Techniques and Tools | Ivano Lauriola, Alberto Lavelli and Fabio Aiolli | | | |
Natural Language Processing Advancements by Deep Learning: A Survey | Amirsina Torfi, Rouzbeh A. Shirvani, Yaser Keneshloo, Nader Tavaf and Edward A. Fox | | | |
Speech and Language Processing (incl. Chapter 15, Logical Representations of Sentence Meaning) | Daniel Jurafsky & James H. Martin | | | |
The Handbook of Computational Linguistics and Natural Language Processing | | 9781118448670 | Wiley-Blackwell | 2013 |
How to Fine-Tune BERT for Text Classification? | Chi Sun, Xipeng Qiu, Yige Xu and Xuanjing Huang | | | |
Hierarchical Transformers for Long Document Classification | Raghavendra Pappagari, Piotr Żelasko, Jesús Villalba, Yishay Carmiel and Najim Dehak | | | |
Zero-Shot Relation Extraction via Reading Comprehension | Omer Levy, Minjoon Seo, Eunsol Choi and Luke Zettlemoyer | | | |
End-to-end Neural Coreference Resolution | Kenton Lee, Luheng He, Mike Lewis and Luke Zettlemoyer | | | |
BERT for Coreference Resolution: Baselines and Analysis | Mandar Joshi, Omer Levy, Daniel S. Weld and Luke Zettlemoyer | | | |
A Unified MRC Framework for Named Entity Recognition | Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu and Jiwei Li | | | |
Automatic Text Summarization: A Comprehensive Survey | Wafaa S. El-Kassas, Cherif R. Salama, Ahmed A. Rafea and Hoda K. Mohamed | | | |