
Computer Vision

Our recent work has focused on activity analysis from video, ranging from fundamental research on categorisation, tracking, segmentation and motion modelling through to the application of this research in several areas. Part of the work explores the integration of vision within a broader cognitive framework that includes audition, language, action, and reasoning. Details of our past work and alumni can be found here.


Who we are

David Hogg

Professor of Artificial Intelligence, Lab Director

Andy Bulpitt

Professor of Computer Science

Anthony (Tony) Cohn

Professor of Automated Reasoning

Rebecca Stone

PhD Research Student

Mohammed Alghamdi

PhD Research Student

Caitlin Howarth

PhD Research Student

Jose Sosa Martinez

PhD Research Student

Fangjun Li

PhD Research Student

Research summaries

Re-animating Characters from TV Shows

Learning appearance and language characteristics from TV shows for re-animating a talking head

Activity monitoring and recovery

Learning and predicting activities in an egocentric setup, applied to equipment workflow

Activity Learning

Learning about the activities within a scene, and the objects involved in these activities

Seeing to learn

Observational learning of robotic manipulation tasks

Tracking carried objects

Detecting carried objects using geometric shape properties, and tracking them using spatio-temporal consistency between the object and the person

Unsupervised activity analysis

Learning about activities observed from a mobile robot

Facial animation

Synthesising an interactive agent by learning from the interactive behaviour of people

Bicycle theft detection

Resolving visual ambiguity by finding consistent explanations, applied to the detection of theft from bicycle racks

Carried bag detection

Detecting large objects (e.g. bags) carried by pedestrians from video

Learning table-top games

Learning about the objects and patterns of moves used in simple table-top games, then applying these to play the games

Modelling pedestrian intentions

Detecting atypical pedestrian pathways, assuming a simple model of goal-directed navigational behaviour
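As a minimal illustration of the goal-directed assumption, a pedestrian heading for a goal travels a path whose length is close to the straight-line displacement, so a high length-to-displacement ratio flags an atypical track. The coordinates and threshold below are purely hypothetical stand-ins, not the lab's actual model or data:

```python
import math

def path_length(path):
    """Total distance travelled along a sequence of (x, y) positions."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def atypicality(path):
    """Travelled length divided by straight-line displacement.
    Goal-directed walkers score near 1; loitering or evasive paths score higher."""
    direct = math.dist(path[0], path[-1])
    return path_length(path) / max(direct, 1e-9)

direct_walk = [(0, 0), (1, 0), (2, 0), (3, 0)]
wandering = [(0, 0), (1, 1), (0, 2), (1, 3), (0, 0.5)]

THRESHOLD = 1.5  # hypothetical cut-off; a real system would learn this from data
print(atypicality(direct_walk) > THRESHOLD)  # False
print(atypicality(wandering) > THRESHOLD)    # True
```

In practice the goal positions and a statistical model of typical routes would be learnt from observed trajectories rather than hand-set as here.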

Consistent tracking

Enforcing global spatio-temporal consistency to enhance reliability of moving object tracking and classification

Traffic interaction

Modelling traffic interaction using learnt qualitative spatio-temporal relations and variable length Markov models
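A variable length Markov model can be viewed as a prediction suffix tree: next-symbol counts are stored for contexts of several lengths, and prediction uses the longest context seen in training. The sketch below uses a toy vocabulary of qualitative relations between two vehicles (the labels and sequence are hypothetical, not the learnt relations from the actual work):

```python
from collections import Counter, defaultdict

# Hypothetical qualitative spatio-temporal relations between two tracked vehicles.
pattern = ["approach", "approach", "beside", "overtake", "ahead", "ahead"]
seq = pattern * 20

MAX_ORDER = 3

# Count next-symbol frequencies for every context up to MAX_ORDER symbols long
# (the prediction-suffix-tree view of a variable length Markov model).
counts = defaultdict(Counter)
for i in range(len(seq)):
    for k in range(MAX_ORDER + 1):
        if i - k < 0:
            break
        ctx = tuple(seq[i - k:i])
        counts[ctx][seq[i]] += 1

def predict(history):
    """Predict the next relation using the longest matching context."""
    for k in range(min(MAX_ORDER, len(history)), -1, -1):
        ctx = tuple(history[-k:]) if k else ()
        if ctx in counts:
            return counts[ctx].most_common(1)[0][0]

print(predict(["approach", "approach"]))  # "beside"
```

Note that the order-1 context `("approach",)` is ambiguous (it is followed by both "approach" and "beside"), which is exactly why the model backs off to the longest context available rather than using a fixed order.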

Vehicle theft

Detecting unusual events by modelling simple interactions between people and vehicles

Anomaly detection in video

Learning motion representations using a convolutional autoencoder with a sparsity constraint, then modelling normality and detecting anomalies using one-class SVMs
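The two-stage pipeline can be sketched as follows. The data, feature dimensions and the single-layer tied-weight encoder below are illustrative stand-ins for real frame-difference features and the convolutional autoencoder; only the overall structure (sparse code learning, then a one-class SVM over the codes) reflects the summary above:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Toy "motion" features: normal motion = small values, anomalies = large erratic ones.
normal = rng.normal(0.0, 0.5, size=(200, 16))
anomalous = rng.normal(3.0, 1.5, size=(10, 16))

# Tiny single-layer autoencoder with an L1 sparsity penalty on the code,
# trained by gradient descent on reconstruction error.
d, h, lam, lr = 16, 8, 1e-3, 0.01
W = rng.normal(0, 0.1, size=(d, h))
for _ in range(500):
    Z = np.maximum(normal @ W, 0.0)   # ReLU code
    X_hat = Z @ W.T                   # tied-weight decoder
    err = X_hat - normal
    dZ = err @ W + lam * np.sign(Z)   # gradient through decoder + L1 subgradient
    dZ[Z <= 0] = 0.0                  # ReLU mask
    gW = normal.T @ dZ + err.T @ Z    # encoder and decoder contributions
    W -= lr * gW / len(normal)

def encode(X):
    return np.maximum(X @ W, 0.0)

# Model normality in code space; +1 = normal, -1 = anomaly.
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(encode(normal))
pred_normal = ocsvm.predict(encode(normal))
pred_anom = ocsvm.predict(encode(anomalous))
```

The `nu` parameter bounds the fraction of training data treated as outliers, so most normal samples fall inside the learnt boundary while the out-of-distribution motion features fall outside it.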