First overall place in IMechE Formula Student - AI 2020
2020 Formula Student - AI

Oxford Brookes Racing - Autonomous, Autonomous Driving group, Visual Artificial Intelligence Lab

The Oxford Brookes Racing - Autonomous team won the second edition of the UK Formula Student AI competition!
This represents an enormous success in pulling together AI and computer vision, localisation, path planning, control strategies and overall integration of the system to the vehicle.

There are two main classes in Formula Student Autonomous: ADS, the Autonomous Driving System class (building your own fully autonomous racing car and competing with it), and DDT, the Dynamic Driving Task class (for which participants enter software for use on the IMechE's own vehicle).
OBR - Autonomous, which is strongly supported by the Visual AI Lab, won the DDT class.
In addition, a new, separate Simulation event was held in place of the track events (due to Covid-19), and we won that too!

Overall results are as follows:

Design: 1st place
Business: 1st place
Real world: 3rd place
Overall DDT event: 1st place

New Simulation event: 1st place

The detailed result breakdowns can be downloaded below.

Leaderboard of the DDT class
Leaderboard of the Simulation event

Best Reviewer award
ICCV 2019 - the CVF/IEEE International Conference on Computer Vision

Gurkirt Singh

PhD student Gurkirt Singh has received a Best Reviewer award at ICCV 2019, the International Conference on Computer Vision and a top venue in the field of computer vision. The award recognises Gurkirt's outstanding work in assessing other scientists' work fairly and accurately.

Congratulations to Guru!

3rd place in the UK 2019 Formula Student - AI competition
2019 Formula Student - AI

Autonomous Tech student society, Autonomous Driving group, Visual Artificial Intelligence Lab

Our brand new Autonomous Formula Student team participated in the first edition of the UK Formula Student AI competition, finishing 3rd overall. This represents an enormous success in pulling together computer vision, localisation, path planning, control strategies and overall integration of the system to the vehicle.

Of particular note is the 1st place in 'Real World Autonomous Driving' presentation. In close collaboration with the Visual Artificial Intelligence Laboratory, the team delivered an incredibly impressive presentation of real autonomous driving challenges which are the subject of current research at OBU - leading them to score full marks in this element of the competition.


2nd place in the action detection challenge
2017 CVPR Charades

Gurkirt Singh, Andreas Lehrmann, Leonid Sigal

The Charades Activity Challenge aims at the automatic understanding of daily activities, by providing realistic videos of people performing everyday tasks. The Charades dataset offers a unique insight into daily activities such as drinking coffee, putting on shoes while sitting in a chair, or snuggling with a blanket on the couch while watching something on a laptop. This enables computer vision algorithms to learn from real and diverse examples of our daily dynamic scenarios. The challenge consists of two separate tracks: classification and localization. The classification track ('Activity Classification') requires recognizing all the activity categories present in a given video, where multiple overlapping activities can occur. The localization track ('Activity Localization') requires finding the temporal locations of all activities in a video.


Try It Award
OBSEA (the Oxford Brookes Social Entrepreneur Awards)

Misbah Munir

The Oxford Brookes Social Entrepreneur Awards are given to students, staff and faculty of Oxford Brookes for their enterprise ideas. The aim of the enterprise should be to address a social issue and solve a problem. The category I competed in is 'Social Innovation', which covers solutions developed through the collaboration of different fields (in my case, the police, medical services and the fire brigade). The OBSEA team define it as follows: 'Social innovation is about finding radical new ideas, practical solutions and relationships to effectively address social needs and problems.'

The basic idea arose from the need to eradicate security loopholes in the system, given the constant threats (for example, terror attacks) posed by the current situation. By automating crowd monitoring in real time, analysing crowd behaviour for suspicious activity in the video feeds captured by existing security cameras, we can identify potential risks and eradicate them by flagging potential threats. Human beings, by nature, have a limited attention span and can only focus on a few things at a time; crowd monitoring, however, is a tedious job that requires sustained attention at many levels simultaneously. Thanks to the availability of high-performance processing machines, we therefore have a good chance of using technology to our advantage in neutralizing potential threats to people's safety.

Feedback from judges: "You came across as very passionate in your project in a very professional way, you showed focus and we are very supportive of you undertaking a feasibility study once you have prepared a detailed plan of how the funding would be spent. You may want to research in more detail which obstacles you may be facing in undertaking this research. We would recommend using an interdisciplinary approach bringing expertise from various backgrounds. In terms of feasibility and practicality, you could consider a potentially less 'complicated' subject using surveillance for locating children getting lost in public spaces."

2nd place in the action detection challenge
2016 CVPR ActivityNet

Gurkirt Singh, Fabio Cuzzolin

The ActivityNet Large Scale Activity Recognition Challenge is a half-day workshop held on July 1 in conjunction with CVPR 2016, in Las Vegas, Nevada. In this workshop, we establish a new challenge to stimulate the computer vision community to develop new algorithms and techniques that improve the state of the art in human activity understanding. The data of this challenge is based on the newly published ActivityNet benchmark. The challenge focuses on recognizing high-level and goal-oriented activities from user-generated videos, similar to those found in internet portals. It is tailored to 200 activity categories in two different tasks: (a) the Untrimmed Classification Challenge: given a long video, predict the labels of the activities present in it; (b) the Detection Challenge: given a long video, predict both the labels and the temporal extents of the activities present in it.


Reading group prize
ICVSS 2015 - the International Computer Vision Summer School

Suman Saha

The ninth edition of the International Computer Vision Summer School aims to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision and Machine Learning.
During a typical PhD and the subsequent research years, students will probably read more than 100 papers. Reading research papers is a skill that can be acquired, and one very different from reading a novel. This session introduces students to that skill.

Next 10 Award
Oxford Brookes University - Faculty of Technology

Fabio Cuzzolin

Research accelerator programme, awarded to the top emerging researchers in the Faculty of Technology.

Application in PDF

Outstanding Reviewer Award
British Machine Vision Conference (BMVC 2012)

Fabio Cuzzolin

Short-listed for the Best Paper Award
British Machine Vision Conference (BMVC 2012)

Michael Sapienza, Fabio Cuzzolin and Philip Torr

For the paper: “Learning discriminative space-time actions from weakly labelled videos"

Current state-of-the-art action classification methods extract feature representations from the entire video clip in which the action unfolds; however, this representation may include irrelevant scene context and movements which are shared amongst multiple action classes. For example, a waving action may be performed whilst walking; however, if the walking movement and scene context appear in other action classes, then they should not be included in a waving movement classifier. In this work, we propose an action classification framework in which more discriminative action subvolumes are learned in a weakly supervised setting, owing to the difficulty of manually labelling massive video datasets. The learned models are used to simultaneously classify video clips and to localise actions to a given space-time subvolume. Each subvolume is cast as a bag-of-features (BoF) instance in a multiple-instance-learning framework, which in turn is used to learn its class membership. We demonstrate quantitatively that even with single fixed-sized subvolumes, the classification performance of our proposed algorithm is superior to the state-of-the-art BoF baseline on the majority of performance measures, and shows promise for space-time action localisation on the most challenging video datasets.
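As a rough sketch (not the paper's exact formulation), the multiple-instance-learning step mentioned in the abstract can be written as follows, where the notation (a bag X of subvolume descriptors x_i and an instance-level scoring function f) is assumed for illustration:

```latex
% Hypothetical MIL formulation; symbols X, x_i and f are assumed,
% not taken from the paper. A video clip is a bag of sub-volume
% bag-of-features descriptors X = \{x_1, \dots, x_n\}.
\hat{y}(X) \;=\; \max_{i = 1, \dots, n} f(x_i)
% The clip is assigned to an action class if at least one sub-volume
% scores positively; the maximising x_i localises the action in space-time.
```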

Best Poster Prize
INRIA Visual Recognition and Machine Learning Summer School (VRML 2012)

Michael Sapienza, Fabio Cuzzolin and Philip Torr

For the poster: “Learning discriminative space-time actions from weakly labelled videos".

Best Poster Award
Seventh International Symposium on Imprecise Probabilities - Theory and Applications (ISIPTA'11)

Fabio Cuzzolin

For the poster: “Geometric conditional belief functions in the belief space".

In this paper we study the problem of conditioning a belief function (b.f.) b with respect to an event A by geometrically projecting such a belief function onto the simplex associated with A in the space of all belief functions. Defining geometric conditional b.f.s by minimizing Lp distances between b and the conditioning simplex in such a "belief" space (rather than in the "mass" space) produces complex results with less natural interpretations in terms of degrees of belief. The question of whether classical approaches, such as Dempster's conditioning, can themselves be reduced to some form of distance minimization remains open: the generation of families of combination rules by (geometric) conditioning appears to be the natural continuation of this line of research.
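The projection described in the abstract can be sketched as a distance minimisation; the symbol for the conditioning simplex below is an assumed notation, not taken from the paper:

```latex
% Sketch of geometric conditioning by Lp projection.
% \mathcal{B}_A denotes the simplex of belief functions associated
% with the event A (assumed notation).
b_A^{L_p} \;=\; \arg\min_{b' \in \mathcal{B}_A} \, \big\| b - b' \big\|_{L_p}
```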

Best Paper Award
Pacific Rim Conference on Artificial Intelligence (PRICAI'08)

Fabio Cuzzolin

For the paper: “Alternative formulations of the theory of evidence based on basic plausibility and commonality assignments"

In this paper we introduce two alternative formulations of the theory of evidence, by proving that both plausibility and commonality functions share the same combinatorial structure of sum functions as belief functions, and by computing their Möbius inverses, called basic plausibility and commonality assignments. The equivalence of the associated formulations of the ToE is mirrored by the geometric congruence of the related simplices. Applications to the probabilistic approximation problem are briefly presented.