Research School Irregular
Published: Wednesday, 28 August 2019
A newsletter for PGR
Wider Research Community
Seminar Double Feature Today
Come out for one or both!
First up, graph stuff!
Graph Programming and Evolving Graphs
Timothy Atkinson, University of York
Wednesday 28th August, 1pm. Room 2.19.
Abstract:
Rule-based graph programming is a rich and deep topic. By sequentially combining rewrite systems defined over graphs, we gain access to complex transformations of graphs which are otherwise non-trivial to describe or implement. In my own work, I have used graph programming extensively to implement effective graph-based evolutionary algorithms which outperform standard approaches from the literature.
This talk is split into two parts:
In part 1, I will give a short general tutorial on the graph programming language GP 2. We will see how concise GP 2 programs solve various problems from graph theory. As a final example, I will present a GP 2 program that transforms Bayesian network DAGs into PDAG (partially directed acyclic graph) representations of the equivalence class to which each DAG belongs.
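For readers new to the area, the rule-plus-iteration style of graph programming can be sketched in a few lines. The snippet below is plain Python, not GP 2, and is purely illustrative: a "rule" is a single match-and-rewrite step, and a program iterates rules to a fixpoint (GP 2's "as long as possible" operator). The example rule computes the transitive closure of a directed graph, a classic graph-program exercise.

```python
# Illustration (not GP 2): rule-based graph rewriting in plain Python.
# A graph is a set of directed edges; a rule finds one match of a
# pattern and rewrites it, returning True if it changed the graph.

def transitive_rule(edges):
    """Match a->b, b->c with no a->c edge; rewrite by adding a->c."""
    for (a, b) in list(edges):
        for (b2, c) in list(edges):
            if b == b2 and a != c and (a, c) not in edges:
                edges.add((a, c))
                return True
    return False

def apply_as_long_as_possible(rule, edges):
    """GP 2's '!' operator: iterate a rule until it no longer applies."""
    while rule(edges):
        pass
    return edges

g = {(1, 2), (2, 3), (3, 4)}
closure = apply_as_long_as_possible(transitive_rule, g)
print(sorted(closure))
```

Sequencing several such rules, each applied as long as possible, gives the kind of compound graph transformation the tutorial will cover.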
In part 2, I will introduce the evolutionary algorithm 'Evolving Graphs by Graph Programming'. Following a brief explanation of the representation and genetic operators (GP 2 programs) utilised, we will see how this technique can be used to synthesise digital circuits from individual logic gates. The approach will then be extended to evolve artificial neural networks, which will be shown to rapidly train controllers for pole-balancing problems.
Then we move on to difficult choices!
Predicting the Difficulty of Multiple-Choice Questions in a High-stakes Medical Exam
Published: Thu, 22 Aug 2019 13:28:15 +0100
School Seminar on Wednesday 28th August 2019 at 2pm in The Atlas Suite
Speaker: Victoria Yaneva
Title: Predicting the Difficulty of Multiple-Choice Questions in a High-stakes Medical Exam
Abstract: For many years, approaches from Natural Language Processing (NLP) have been applied to estimating reading difficulty, but relatively few attempts have been made to measure conceptual difficulty or question difficulty beyond linguistic complexity. In addition to expanding the horizons of NLP research, estimating the construct-relevant difficulty of test questions has high practical value, because ensuring that exam questions are appropriately difficult is both one of the most important and one of the most costly tasks in the testing industry.
In this talk, I will present our ongoing experiments towards a method for estimating the difficulty of MCQs from a high-stakes medical exam, where all questions were deliberately written to a common reading level. To accomplish this, we extract a large number of linguistic features and embedding types, as well as features quantifying the difficulty of the items for an automatic question-answering system. Our results are compared to various baselines, and the use of interpretable features allows us to draw recommendations for item writing.
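To give a flavour of the feature-extraction step, the sketch below computes a few simple interpretable features for a question stem. The feature set (word count, mean word length, type-token ratio) and the example stem are illustrative assumptions only; the talk's actual features and embeddings are far richer.

```python
# Hypothetical sketch: simple linguistic features for an MCQ stem.
# Real systems would add syntactic, semantic, and embedding features.

def linguistic_features(text):
    words = text.lower().split()
    n = len(words)
    return {
        "n_words": n,                                        # length
        "mean_word_len": sum(len(w) for w in words) / n,     # lexical load
        "type_token_ratio": len(set(words)) / n,             # vocabulary variety
    }

stem = "A 45 year old man presents with chest pain radiating to the left arm"
feats = linguistic_features(stem)
print(feats)
```

Features like these, alongside item-response data, would then feed a standard regression or classification model to predict question difficulty.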