Latest News!

March 2021

The ROAD Challenge Workshop has been accepted as a full-day event @ ICCV 2021

Our proposal for an ICCV 2021 Workshop entitled "The ROAD Challenge: Event detection for situation awareness in autonomous driving" has been accepted as a full-day event at ICCV 2021.

The goal of this workshop is to bring to the forefront of autonomous driving research the topic of situation awareness, understood as the ability to create semantically useful representations of dynamic road scenes in terms of the notion of a road event, itself inspired by the central computer vision notion of 'action'.

We propose to define a road event as a triplet E = (Ag, Ac, Loc) composed of a moving agent Ag, the action Ac it performs, and the location Loc in which this takes place (on the image plane if only video data is available, but potentially on a depth map if 3D information is at hand). Inspired by the standard practice in action detection, we propose to represent road events as 'tubes', i.e., time series of frame-wise bounding box detections, as the building blocks of an intermediate semantic representation of a dynamic road scene.
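To make the representation concrete, here is a minimal sketch of what an event tube could look like as a data structure (a hypothetical illustration; the class and field names are ours, not the ROAD dataset's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A frame-wise bounding box (x1, y1, x2, y2) on the image plane."""
    frame: int
    box: tuple

@dataclass
class RoadEvent:
    """A road event E = (Ag, Ac, Loc): agent, action, and location.

    The location is a 'tube', i.e. a time series of frame-wise
    bounding-box detections, as in the workshop description.
    """
    agent: str   # e.g. 'pedestrian', 'car'
    action: str  # e.g. 'crossing', 'turning-left'
    tube: list   # list of Detection, ordered by frame index

    def duration(self) -> int:
        """Temporal extent of the event, in frames."""
        if not self.tube:
            return 0
        return self.tube[-1].frame - self.tube[0].frame + 1

# Example: a pedestrian crossing, tracked over three consecutive frames
event = RoadEvent(
    agent='pedestrian',
    action='crossing',
    tube=[Detection(f, (10 + 5 * f, 20, 50 + 5 * f, 120)) for f in range(3)],
)
print(event.duration())  # 3
```

In this view, a dynamic scene is simply a collection of such tubes, one per concurrent event.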

As a side effect of this proposal, the workshop also aims to stimulate a paradigm shift in the field of action detection, moving the focus from the objects/actors themselves and their appearance to what they do and the meaning of their behaviour; the concept of action is here extended to human-operated machinery, viewed as an extension of the human mind.


Organisers

  • Fabio Cuzzolin (Oxford Brookes University)
  • Giuseppe Di Gironimo (University of Naples Federico II)
  • Andrew Bradley (Oxford Brookes University)
  • Reza Javanmard Alitappeh (University of Science and Technology of Mazandaran)
  • Gurkirt Singh (ETH Zurich)
  • Stanislao Grazioso (University of Naples Federico II)
  • Valentina Musat (University of Oxford)
Invited Speakers

  • Raquel Urtasun (University of Toronto)
  • Adrien Gaidon (Toyota Research Institute)
  • Daniela Rus (MIT)
  • Deva Ramanan (Carnegie Mellon)
  • Paul Newman (University of Oxford, Oxbotica)
March 15 2021

The International Workshop on Continual Semi-Supervised Learning @ IJCAI 2021

Our joint proposal for an IJCAI 2021 Workshop on Continual Semi-Supervised Learning has been accepted as a full-day event at IJCAI 2021.

Whereas the continual learning problem has recently been the object of much attention in the machine learning community, it has mainly been approached from the point of view of preventing the model, as it is updated in the light of new data, from 'catastrophically forgetting' its initial, useful knowledge and abilities. A typical example is that of an object detector which needs to be extended to include classes not originally in its list (e.g., 'donkey' in a farm setting), while retaining its ability to correctly detect, say, a 'horse'. The unspoken assumption is that we are quite satisfied with the model we have, and simply wish to extend its capabilities to new settings and classes. An example of this focus is the best paper award assigned at the latest ICML 2020 workshop on the topic.

This way of posing the continual learning problem, however, is in rather stark contrast with common real-world situations in which an initial model is trained using limited data, only for it to then be deployed without any additional supervision. Think of a person detector used for traffic safety purposes on a busy street. Even after having been trained extensively on the many available public datasets, experience shows that its performance in its target setting will likely be less than optimal. In this scenario, the objective is for the model to be incrementally updated using the new (unlabelled) data, in order to adapt to a target domain that is continually shifting with time (think of night/day and weekly/yearly cycles in the data captured by a camera outside an office block entrance).

The aim of this workshop is to formalise this form of continual learning, which we term continual semi-supervised learning (CSSL), and introduce it to the wider machine learning community, in order to mobilise efforts in this new direction. Secondly, it aims to provide clarity on how training and testing should be designed in a continual setting.

March 26, 2021

Promotion to Professor Level 3

On March 26, 2021 the Senior Academic Promotions Committee considered and approved Fabio's application for promotion to Professor Level 3, with his contract backdated to 1 September 2020.

Fabio would like to thank all the external referees who were so kind as to support his application!
March 2021

SARAS-MESAD Challenge at MICCAI 2021

The SARAS challenge on Multi-domain Endoscopic Surgeon Action Detection (SARAS-MESAD), proposed and led by Vivek Singh, was accepted as a half-day event at MICCAI 2021.

The annual MICCAI conference attracts world leading biomedical scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer assisted intervention. MICCAI 2021, the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, will be held from September 27th to October 1st 2021.

In our SARAS work, we have captured endoscopic video data during radical prostatectomy under two different settings ('domains'): real procedures on real patients, and simplified procedures on artificial anatomies ('phantoms'). As shown in our MIDL 2020 challenge (over real data only), variations due to patient anatomy, surgeon style and so on dramatically reduce the performance of even state-of-the-art detectors compared to non-surgical benchmark datasets. Videos captured in an artificial setting can provide more data, but are characterised by significant differences in appearance compared to real videos and are subject to variations in the appearance of the phantoms over time. Inspired by these all-too-real issues, this challenge's goal is to test the possibility of learning more robust models across domains (e.g. across different procedures which nevertheless share some types of tools or surgeon actions; or, in the SARAS case, learning from both real and artificial settings whose lists of actions overlap, but do not coincide).

The challenge provides two datasets for surgeon action detection: the first (Dataset-R) is composed of 4 annotated videos of real surgeries on human patients, while the second (Dataset-A) contains 6 annotated videos of surgical procedures on artificial human anatomies. All videos capture instances of the same procedure, Robot-Assisted Radical Prostatectomy (RARP), but with some differences in the set of classes. The two datasets share a subset of 10 action classes, while they differ in the remaining classes (because of the requirements of the SARAS demonstrators). Together they provide a perfect opportunity to explore the possibility of exploiting multi-domain datasets designed for similar objectives to improve performance on each individual task.

Link to full challenge proposal description.

March 2021

The ROad event Awareness Dataset for Autonomous Driving (ROAD)

March 2021

Ajmal and Izzedin join the Lab

Two new members have recently joined the lab.

Ajmal Shahbaz is the new Huawei research fellow working on our project on 'Modelling complex activities in video'.
Ajmal received a PhD in electrical engineering from the University of Ulsan (UOU) in South Korea, for a thesis entitled 'Efficient Unsupervised and Supervised Change Detectors for Intelligent Video Analytics'. During his PhD he worked on change detection algorithms for intelligent surveillance systems, in particular implementing a convolutional neural network in Keras on low-end hardware, but also on semantic segmentation for drone imagery, implemented in PyTorch.
He has published papers in high-impact journals such as the IEEE Transactions on Industrial Electronics (IF: 7.515), the IEEE Transactions on Industrial Informatics (IF: 9.112) and the IEEE Sensors Journal (IF: 3.076). He has also published 27 papers in conference proceedings.

Izzedin Teeti is the new PhD student in Autonomous Transport Systems; he will lead the effort of the Oxford Brookes Racing - Autonomous team of undergraduate students as part of the IMechE Formula Student AI project.
Prior to joining us, Izzedin completed a BSc in Mechanical Engineering at the American University of Madaba, Jordan, with a GPA of 93%. He was then awarded an MSc in Robotics (with distinction) by King's College London, UK, where he received the prize for the best overall performance in the MSc in Robotics for the 2018-2019 cohort.
Izzedin is the founder of the Palestinian Artificial Intelligence Community, and is proficient in state-of-the-art algorithms such as YOLO, Inception and DeepFace, as well as RNN and LSTM models.

January 2021

New Knowledge Transfer Partnership with Supponor

A Knowledge Transfer Partnership (KTP) with Supponor was funded at the latest round 4 by Innovate UK.

Supponor provides commercially and technically proven solutions that deliver virtually enhanced advertising spaces in live sports broadcast. They do this by respectfully and authentically overlaying physical TV visible signage – such as dynamic perimeter LED boards or static billboards; by placing new or additional virtual assets in and around the field of play; or by delivering a combination of both.
More information is provided on the Supponor web site.

This project will utilise machine learning to achieve real-time understanding of video scenes and consistent segmentation of advertisement boards and pitch objects without the use of existing infrared cameras and hardware infrastructure.

A KTP Associate position will be advertised as soon as possible. Salary will be around GBP 35,000 per annum.

January 2021

The geometry of uncertainty

Fabio's new book The geometry of uncertainty - The geometry of imprecise probabilities is finally out.

The principal aim of this book is to introduce to the widest possible audience an original view of belief calculus and uncertainty theory. In this geometric approach to uncertainty, uncertainty measures can be seen as points of a suitably complex geometric space, and manipulated in that space, for example, combined or conditioned.
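As a minimal illustration of this geometric view (a standard textbook-style example we add here, not an excerpt from the book): on a binary frame of discernment, a belief function is fully determined by the masses it assigns to the two outcomes, and can therefore be identified with a point of a triangle in the plane.

```latex
% Mass constraints on a binary frame \Theta = \{x, y\}:
%   m(x) + m(y) + m(\Theta) = 1,  with  m(x), m(y), m(\Theta) \ge 0.
% Identifying a belief function with the point (m(x), m(y)) yields the
% belief space as the simplex (triangle)
\[
  \mathcal{B}_2 = \bigl\{ (m(x), m(y)) : m(x) \ge 0,\ m(y) \ge 0,\ m(x) + m(y) \le 1 \bigr\},
\]
% whose vertices are the vacuous belief function (0,0) (total ignorance,
% m(\Theta) = 1) and the two categorical, 'certain' belief functions
% (1,0) and (0,1). Combination and conditioning then become geometric
% operations on points of this simplex.
```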

In the chapters in Part I, Theories of Uncertainty, the author offers an extensive recapitulation of the state of the art in the mathematics of uncertainty. This part of the book contains the most comprehensive summary to date of the whole of belief theory, with Chapter 4 outlining for the first time, and in a logical order, all the steps of the reasoning chain associated with modelling uncertainty using belief functions, in an attempt to provide a self-contained manual for the working scientist. In addition, Chapter 5 proposes what is possibly the most detailed compendium available of all theories of uncertainty.

Part II, The Geometry of Uncertainty, is the core of the book, as it introduces the author's own geometric approach to uncertainty theory, starting with the geometry of belief functions. Chapter 7 studies the geometry of the space of belief functions, or belief space, both in terms of a simplex and in terms of its recursive bundle structure. Chapter 8 extends the analysis to Dempster's rule of combination, introducing the notion of a conditional subspace and outlining a simple geometric construction for Dempster's sum. Chapter 9 delves into the combinatorial properties of plausibility and commonality functions, as equivalent representations of the evidence carried by a belief function. Chapter 10 then starts extending the applicability of the geometric approach to other uncertainty measures, focusing in particular on possibility measures (consonant belief functions) and the related notion of a consistent belief function.

The chapters in Part III, Geometric Interplays, are concerned with the interplay of uncertainty measures of different kinds, and the geometry of their relationship, with a particular focus on the approximation problem. Part IV, Geometric Reasoning, examines the application of the geometric approach to the various elements of the reasoning chain illustrated in Chapter 4, in particular conditioning and decision making.
Part V concludes the book by outlining a future, complete statistical theory of random sets, future extensions of the geometric approach, and identifying high-impact applications to climate change, machine learning and artificial intelligence.

The book is suitable for researchers in artificial intelligence, statistics, and applied science engaged with theories of uncertainty. The book is supported with the most comprehensive bibliography on belief and uncertainty theory.

Download the book flyer here.

November 17 2020

Area Chair for ICCV 2021

Fabio has been invited to act as Area Chair for the upcoming International Conference on Computer Vision (ICCV 2021), which will be held in Montreal, Canada, from October 11 to October 17, 2021.

ICCV is the premier international computer vision event comprising the main conference and several co-located workshops and tutorials. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

Call for Papers: follow this link.

Important dates:
  • Paper registration deadline: March 10, 2021
  • Paper submission deadline: March 17, 2021
  • Supplementary material deadline: March 24, 2021
  • Reviews Released to Authors: June 10, 2021
  • Rebuttal Due: June 17, 2021
  • Final Decisions to Authors: July 22, 2021
  • Camera ready due: August 17, 2021
  • Conference Dates: October 11-17, 2021

October 27 2020

Epistemic AI is funded!

The Visual AI Lab has won funding for a €3M Future and Emerging Technologies (FET) project, funded by the EU Horizon 2020 programme, entitled "Epistemic AI".
Prof Fabio Cuzzolin will be the Coordinator of the project. The other two partners are KU Leuven (Belgium), led by Senior Researcher Dr Keivan Shariatmadar, and TU Delft (Netherlands), led by Associate Professor Neil Yorke-Smith.
The project will start in early 2021 and will have a duration of 48 months.

Although artificial intelligence (AI) has improved remarkably over the last years, its inability to deal with fundamental uncertainty severely limits its application. This proposal re-imagines AI with a proper treatment of the uncertainty stemming from our forcibly partial knowledge of the world.
As currently practised, AI cannot confidently make predictions robust enough to stand the test of data generated by processes different (even in tiny details, as shown by 'adversarial' results able to fool deep neural networks) from those studied at training time. While recognising this issue under different names (e.g. 'overfitting'), traditional machine learning seems unable to address it in non-incremental ways. As a result, AI systems exhibit brittle behaviour and find it difficult to operate in new situations, e.g. adapting to driving in heavy rain or to other road users' different styles of driving, deriving for instance from cultural traits.
Epistemic AI’s overall objective is to create a new paradigm for a next-generation artificial intelligence providing worst-case guarantees on its predictions thanks to a proper modelling of real-world uncertainties.

October 5 2020

New Topic in Frontiers in Artificial Intelligence

Fabio Cuzzolin, Bogdan Cirstea and Barbara Sahakian are Topic Editors for a new topic in Frontiers in Artificial Intelligence.
The title of the topic is Theory of Mind in Humans and in Machines.

About this Research Topic

Theory of Mind (ToM) - the ability of the human mind to attribute mental states to others - is a key component of human cognition. ToM encompasses inferring others’ beliefs, desires, goals, and preferences. How humans can perform ToM is still an unresolved fundamental scientific problem. Furthermore, for a true understanding of ToM in humans, progress is required at multiple levels of analysis: computational, algorithmic, and physical. The same capability of inferring human mental states is a prerequisite for artificial intelligence (AI) to be integrated into human society. Autonomous cars, for example, will need to be able to infer the mental states of human drivers and pedestrians to predict their behavior. As AI becomes more powerful and pervasive its ability to infer human goals, desires, and intentions, even in ambiguous or new situations, will become ever more important.

The last decades have seen significant progress in the effort to decipher ToM, particularly at the computational level. In cognitive science, experiments with infants and children have started uncovering the basis of an intuitive ToM, while neuroscientific investigations have started revealing the major areas of the brain that are involved in ToM inferences. At the same time, new AI algorithms for inferring human mental states have been proposed with better scalability prospects and more complex applications. During the last several years, this momentum has been particularly fueled by deep learning. Despite these encouraging signs, the prospects of a full understanding of human ToM remain distant, while recent work has highlighted the insufficiency of current machine ToM methods to scale up to arbitrary levels of intelligence and to model the full complexity of human values and intentions. In this Research Topic, we want to address the problem of how ToM (inferring human beliefs, desires, goals, and preferences) can be implemented in machines, potentially drawing inspiration from how humans seem to achieve this. Thus, works making progress towards an understanding of how humans accomplish ToM are welcome.

This Research Topic aims to span the fields of artificial intelligence, cognitive science, and neuroscience. Its intention is to formulate computational proposals of cognitive science- and neuroscience-inspired Theory of Mind; compare the strengths and limitations of Theory of Mind, Inverse Reinforcement Learning, and other reward specification methods (e.g. learning from preferences); establish common baselines, metrics, and benchmarks; and identify open questions. Topics of interest will include, but are not limited to, theoretical proposals, computational experiments, and case studies of:
  • Computational Theory of Mind
  • Learning from Demonstrations
  • Cognitive Models for Learning from Demonstration and Planning
  • Neuroscience-inspired models of Theory of Mind
  • Human-Robot Interaction
  • Cooperative Inverse Reinforcement Learning (assistance games)
  • Learning from Preferences
  • Learning by Observing Third-Person Demonstrations
  • Learning from Non-Expert Demonstrations
Keywords: Theory of Mind, Inverse Reinforcement Learning, Reward specification, Human-Machine Interaction, AI ethics, Value learning

Submission deadlines: Abstract Dec 4 2020; Manuscript April 3 2021.

Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

September 16 2020

Paper at ACCV 2020

A paper was accepted for publication at ACCV 2020 (the Asian Conference on Computer Vision), entitled Video-based crowd counting using a multi-scale optical flow pyramid network by Mohammad Asiful Hossain, Kevin Cannon, Daesik Jang, Fabio Cuzzolin and Zhan Xu.
The work is the result of a joint effort with Huawei Canada's Vancouver IC Lab.

This paper presents a novel approach to the task of video-based crowd counting, which can be formalized as the regression problem of learning a mapping from an input image to an output crowd density map. Convolutional neural networks (CNNs) have demonstrated striking accuracy gains in a range of computer vision tasks, including crowd counting. However, the dominant focus within the crowd counting literature has been on the single-frame case or applying CNNs to videos in a frame-by-frame fashion without leveraging motion information. This paper proposes a novel architecture that exploits the spatiotemporal information captured in a video stream by combining an optical flow pyramid with an appearance-based CNN. Extensive empirical evaluation on five public datasets comparing against numerous state-of-the-art approaches demonstrates the efficacy of the proposed architecture, with our methods reporting best results on all datasets. Finally, a set of transfer learning experiments shows that, once the proposed model is trained on one dataset, it can be transferred to another using a limited number of training examples and still exhibit high accuracy.
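As a toy illustration of the density-map formulation mentioned in the abstract (our own minimal sketch, not the paper's architecture): each annotated head location contributes a normalised Gaussian blob to the target map, so that the map sums to the crowd count.

```python
import numpy as np

def density_map(points, shape, sigma=1.0):
    """Build a ground-truth density map from annotated head locations.

    Each point contributes a Gaussian normalised to unit mass, so the
    whole map integrates (sums) to the number of people in the image.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for (py, px) in points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalise so each person counts exactly once
    return dmap

# Three people in a 32x32 toy image: the map sums to 3
dmap = density_map([(5, 5), (12, 20), (20, 10)], shape=(32, 32))
print(round(dmap.sum()))  # 3
```

A counting network regresses such a map from the input frame, and the predicted count is simply the sum over the predicted map.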

The PDF version of the paper can be found here.

July 27 2020

New book chapter

A new book chapter entitled Spatio-Temporal Action Instance Segmentation and Localisation was published in Modelling Human Motion - From Human Perception to Robot Design, edited by Nicoletta Noceti, Alessandra Sciutti and Francesco Rea.

The authors are Suman Saha, Gurkirt Singh, Michael Sapienza, Philip H. S. Torr and Fabio Cuzzolin. Suman, Gurkirt and Michael are all former PhD students with the Visual AI Lab. Philip Torr is a Professor with the Department of Engineering Science of Oxford University.

Current state-of-the-art human action recognition is focused on the classification of temporally trimmed videos in which only one action occurs per frame. In this work we address the problem of action localisation and instance segmentation in which multiple concurrent actions of the same class may be segmented out of an image sequence. We cast the action tube extraction as an energy maximisation problem in which configurations of region proposals in each frame are assigned a cost and the best action tubes are selected via two passes of dynamic programming. One pass associates region proposals in space and time for each action category, and another pass is used to solve for the tube’s temporal extent and to enforce a smooth label sequence through the video. In addition, by taking advantage of recent work on action foreground-background segmentation, we are able to associate each tube with class-specific segmentations. We demonstrate the performance of our algorithm on the challenging LIRIS-HARL dataset and achieve a new state-of-the-art result which is 14.3 times better than previous methods.
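The tube-linking idea described in the abstract can be sketched as a minimal, single-pass Viterbi-style dynamic program (an illustrative formulation with an assumed IoU transition term and toy detections, not the authors' implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_tube(frames, lam=1.0):
    """frames: per-frame lists of (box, score); returns best box index per frame.

    Maximises the sum of detection scores plus lam * IoU overlap between
    consecutive boxes, via dynamic programming over frames.
    """
    n = len(frames)
    # best[t][j]: best cumulative value of a path ending at box j of frame t
    best = [[s for _, s in frames[0]]]
    back = []
    for t in range(1, n):
        row, ptr = [], []
        for box_j, score_j in frames[t]:
            vals = [best[t - 1][i] + lam * iou(frames[t - 1][i][0], box_j)
                    for i in range(len(frames[t - 1]))]
            i_star = max(range(len(vals)), key=vals.__getitem__)
            row.append(vals[i_star] + score_j)
            ptr.append(i_star)
        best.append(row)
        back.append(ptr)
    # trace back the highest-value path
    j = max(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for t in range(n - 2, -1, -1):
        j = back[t][j]
        path.append(j)
    return path[::-1]

# Toy example: two candidate boxes per frame over two frames
frames = [
    [((0, 0, 10, 10), 0.9), ((100, 100, 110, 110), 0.5)],
    [((1, 1, 11, 11), 0.3), ((100, 100, 110, 110), 0.8)],
]
print(link_tube(frames))  # [1, 1]: the well-overlapping, high-scoring track wins
```

A second pass over the resulting tube (not shown) would then trim its temporal extent and smooth the label sequence, as described above.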

The chapter's entry can be found on the publisher's website here.
August 2020

Psychological Medicine editorial: "Knowing me, knowing you: Theory of mind in AI"

August 2020

Oxford Brookes Racing - Autonomous achieves 1st place overall in the 2020 Formula Student AI

July 9 2020

The MIDL 2020 SARAS-ESAD Challenge was a success!

The Challenge was selected, out of all conference challenges, to feature in Best of MIDL 2020 - click on the picture below to access the feature article.

Check out the SARAS-ESAD Challenge web site for more information and access to the baseline code and the dataset.
February 11 2020

Launch of the Institute for Ethical AI

We are excited to invite you to the official launch of the Oxford Brookes University Institute for Ethical Artificial Intelligence on Tuesday the 11th of February, from 3-6pm, in the Kennedy Room of the John Henry Brookes building.
Please join us for an afternoon of drinks, networking, and learning about some of the pressing issues in Ethical AI through a series of introductory talks.

The Institute for Ethical Artificial Intelligence at Oxford Brookes University promotes the ethical development and deployment of Artificial Intelligence technologies. Artificial Intelligence technology is becoming deeply embedded into the fabric of society and is having wide-ranging impact on many people's work and private lives. This has given rise to considerable concern over the ethical risks that the technology presents. At Oxford Brookes University we are addressing the key ethical and business risks faced by many organisations as they seek to integrate Artificial Intelligence into their workflows. This includes compliance with the law, mitigating bias in algorithms, and the regulation and validation of AI systems. We offer practical support from our technical, business and legal experts to ensure that Artificial Intelligence delivers benefit to industry and society as a whole.

To reserve a place, please register via the event page.

Prof Fabio Cuzzolin has been invited to serve as Steering Committee member for the newly founded Institute.

December 4 2019

Gurkirt successfully defends his PhD!

Dr Gurkirt Singh has successfully defended his PhD thesis at his viva, held on December 4 2019. The internal examiner was Dr Fridolin Wild, Senior Research Fellow at Oxford Brookes University, whereas the external examiner was Prof Andrea Vedaldi, Associate Professor at the Department of Engineering Science of Oxford University.

The title of Gurkirt's work is Online Spatiotemporal Action Detection and Prediction via Causal Representations.

Congratulations to Gurkirt for his excellent PhD work!

November 2019

Area Chair for ECCV 2020

Fabio has been invited to act as Area Chair for the upcoming European Conference on Computer Vision (ECCV 2020), which will be held in Glasgow, August 23-28 2020.

The European Conference on Computer Vision is the top European conference in the image analysis area, and one of the top three computer vision venues, together with ICCV and CVPR. ECCV 2020 will be a four-day single-track conference, with additional activities: over 1000 posters, workshops, tutorials, and an industrial exhibition. The conference will present high-quality, previously unpublished research on many aspects of computer vision.

Important dates:
  • Paper submission deadline: 5 March 2020;
  • Rebuttal Period: 21 - 27 May 2020;
  • Decisions to Authors: 3 July 2020;
  • Final Version Deadline: 17 July 2020;
  • Conference Dates: 23-28 August 2020.
November 13 2019

India government minister praises Dinesh's doctoral work

Our SARAS research fellow Dinesh Jackson's doctoral thesis on "Tuberculosis Recognition System using Deep Learning Techniques" was publicly praised by Smriti Irani, India's Union Minister for Women & Child Development and Textiles, speaking at the annual convocation of VIT (Vellore Institute of Technology), India.

Speaking at the convocation, the Minister said that VIT students already appeared to account for a sizeable share of work on AI in healthcare. Specifically mentioning the theses of two students, Jeevakala (computer-aided diagnosis systems) and Jackson Samuel (TB recognition systems), she said she was keen to read their papers.

A link to a news article can be found here.
December 3 2019

Dinesh and Mohamed have joined the Lab

Two new members of staff, Dr Dinesh Jackson Samuel Ravindran Charles and Dr Mohamed Ibrahim Mohamed have joined the Laboratory, as part of the SARAS Horizon 2020 project.

Dinesh completed his PhD studies at the Vellore Institute of Technology, Chennai, India. His doctoral dissertation concerned the development of a "Cybernetic Tuberculosis (TB) Detection System using Deep Learning Techniques", to assist technicians in areas with high disease prevalence. As part of this research, Dinesh designed and developed a programmable microscopic stage to automate microscopic examination, mitigating the reliance on skilled technicians. He had been working as a Teaching-cum-Research Assistant at the Vellore Institute of Technology since 2014.

Before joining the Lab, Mohamed worked as a Senior Computer Vision Engineer at Huawei Technologies, where he was responsible for designing and architecting computer vision systems for Android phones using Python, TensorFlow and Keras in a Linux environment.
Mohamed obtained his PhD in Electrical Engineering in June 2016 from Staffordshire University. His dissertation focused on using machine learning techniques to design real-time event detection algorithms that work robustly in sensor nodes and the development of an intelligent adaptive data reduction algorithm based on Markov Decision Processes (MDPs).

September 17 2019

SARAS's mid-term review was a success!

Check out the SARAS web site here for more up-to-date news.
August 17 2019

The papers

S. Olivastri, G. Singh and F. Cuzzolin, End-to-End Video Captioning

Link to preprint

G. Singh and F. Cuzzolin, Recurrent Convolutions for Causal 3D CNNs

Link to preprint

were accepted for publication at the First International Workshop on Large Scale Holistic Video Understanding at ICCV 2019, Seoul, South Korea, October 2019.

The International Conference on Computer Vision (ICCV) is the premier international venue in the field of computer vision. The main objective of the workshop is to establish a video benchmark integrating joint recognition of all semantic concepts, as a single class label per task is often not sufficient to describe the holistic content of a video. The planned panel discussion with the world's leading experts on this problem will provide fruitful input and a source of ideas for all participants.

August 13 2019

Venus has arrived!

Venus, our new SCAN-built 8-GPU workstation, equipped with eight RTX cards totalling 192 GB of GPU memory, has arrived.

Venus will join our existing machines, Mercury, Mars, Sun and Jupiter, and will constitute the backbone of the computing resources of the Laboratory, practically doubling our processing power and allowing the processing of video clips containing 64 video frames. This will be crucial in allowing us to not just match but outperform the existing state of the art in action classification and detection.
August 2019

New SARAS postdocs have joined the Lab

Two new members of staff, Prof Inna Skarha-Bandurova and Dr Vivek Singh have just joined the Laboratory, as part of the SARAS Horizon 2020 project.

Inna was previously the Head of the Computer Science and Engineering (CSE) Department at the V. Dahl East Ukrainian National University (EUNU), Severodonetsk, Ukraine. She is the author of more than 150 scientific publications, 3 books, 10 academic courses, and 44 teaching and learning materials. She has extensive experience of local and international research projects dating back to 2002, and broad knowledge and practical experience across different areas of AI, including expert systems, decision support techniques, and machine learning.

Vivek was previously a research consultant with Softonics IT Services, NOIDA, and a teaching associate at the Thapar Institute of Engineering and Technology, Patiala. His research mainly focuses on the structural components of deep learning algorithms, aiming to enhance their modelling capacity and overcome their inherent limitations. He has also worked on different applications of computer vision, deep learning and machine learning algorithms in vision- and natural language-based systems.

August 2019

Visit of Professor Ahmad Osman

Professor Ahmad Osman from the Fraunhofer Institute and the Saarland University of Applied Sciences, Germany, is visiting the Visual Artificial Intelligence Laboratory and the School of Engineering, Computing and Mathematics.
Ahmad will stay with us for the month of August.

Ahmad Osman is a Professor for Inspection Technologies and Signal and Image Processing at the Fraunhofer Institute for Nondestructive Testing (IZFP), and the Leader of the AutomaTiQ research group. His research interests span industrial applications of machine learning, sensor fusion in the framework of evidence theory, signal and image processing with a focus on object and defect detection, nondestructive testing methods, quality control and driver assistance systems.

The visit is key to paving the way for a wider collaboration between Oxford Brookes University, the Saarland University of Applied Sciences and the Fraunhofer Institute, covering a number of aspects:
  • A joint Horizon 2020 application to the upcoming i4MS call, deadline November 18 2019, led by KU Leuven;
  • The possibility of establishing a permanent exchange of MSc students in the framework of the Erasmus+ scheme;
  • Joint research in the field of decision making under uncertainty, but also autonomous driving and visual inspection;
  • Finally, the possible joint application to the Marie Curie programme, in collaboration with INSA-Lyon and other partners.

August 5 2019

ICCV 2019 Best Reviewer Award!

PhD student Gurkirt Singh has received a Best Reviewer award from ICCV 2019, the International Conference on Computer Vision, a top venue in the field of computer vision. The award recognises Gurkirt's outstanding work in assessing other scientists' work in a fair and accurate manner.

Congratulations to Guru!

July 17 2019

Third place in the 2019 Formula Student - AI competition

We are absolutely delighted to announce that our brand new Autonomous Formula Student team successfully completed 10 laps of the circuit with a fully self-driving car in the 'Track Drive' dynamic event, finishing in 3rd place overall. This represents an enormous success in pulling together computer vision, localisation, path planning, control strategies and the overall integration of the system with the vehicle.

Of particular note are the following highlights:

  • 1st place in 'Real World Autonomous Driving' presentation
    In a collaboration between the Autonomous Driving research group and the Visual Artificial Intelligence Laboratory, the team delivered an incredibly impressive presentation of real autonomous driving challenges which are the subject of current research at OBU - leading them to score full marks in this element of the competition.
    Thanks to Fabio Cuzzolin, Reza Javanmard, Gurkirt Singh, Peter Ball, Matthias Rolf, Muhammad Hilmi Kamarudin and Gokhan Budan for all your help and support - the trophy belongs to you too!

  • 2nd place in 'Business Plan' event
    Working closely with Oxfordshire County Council and the MAAS:CAV consortium to develop a real business proposal (which we plan to use as a feasibility study in an upcoming research grant application), the team achieved an impressive 2nd place in the Business Plan competition - narrowly missing out on first place by less than 1 point!

  • 2nd place in 'Design' competition
    Presenting their autonomous driving software and designs to a panel of industry experts, the team were praised for their innovative ideas, detailed explanations and impressive presentation. This resulted in them being invited by one of the judges to present their work to the staff at RoboRace - the 'Autonomous Formula 1'.
Huge congratulations go to Petar Georgiev and the team of students who made all this happen - we could not be more proud of you!

Formula Student - Artificial Intelligence


July 2 2019:

Fabio was awarded a Leverhulme Trust Research Project Grant, for a project entitled Theory of mind at the interface of neuroscience and AI, in partnership with Professor Barbara Sahakian, Department of Psychiatry, University of Cambridge.

The project will last 30 months, for a total budget of 273,000 pounds, which will be used to hire two postdoctoral research assistants at Oxford Brookes and Cambridge University.

Emerging applications of artificial intelligence are highlighting the limitations of established approaches in situations involving humans. The integration of neuroscience and machine learning has the potential to enable significant advances in both fields. Theory of Mind capabilities, i.e., the ability to 'read' other sentient beings' mental states, are crucial for the development of a next generation, "human-centric" artificial intelligence aimed to understand the behaviour of complex agents. In a mutually beneficial process, computational models developed within artificial intelligence could provide new insights about how these mechanisms work in the human brain.

June 2019

Brookes climbs 8 spots to #33 in the Guardian university guide 2020

May 2019

A new KTP Associate

Dr Neha Bhargava has joined the Lab as the next Associate funded by the Knowledge Transfer Partnership with Createc Technologies. Neha will stay with us for at least two years, until April 2021.

Before joining the Lab, Neha completed her PhD at the Vision and Image Processing Lab of the Indian Institute of Technology (IIT) Bombay, under the supervision of Professor Subhasis Chaudhuri.
Neha conducted her PhD on the topic of understanding crowd behaviour. The purpose of her thesis was to analyse crowd motion at various levels of granularity: Individual, Group and Collective. To tackle the issue she proposed a unified framework for identifying the groups and the activities performed at each level.
As part of this project, Neha will work towards revolutionising sports analytics, and further progress on her previous work on crowd behaviour understanding using (multimodal) deep learning.

April 12 2019:

The paper Evidence Combination Based on Credal Belief Redistribution for Pattern Classification, co-authored by Prof Fabio Cuzzolin, is accepted for publication by the IEEE Transactions on Fuzzy Systems, one of the top CS journals by impact factor (currently 8.415).

Evidence theory, also called belief functions theory, provides an efficient tool to represent and combine uncertain information for pattern classification. Evidence combination can be interpreted, in some applications, as classifier fusion. The sources of evidence corresponding to multiple classifiers usually exhibit different classification qualities, and they are often discounted using different weights before combination. In order to achieve the best possible fusion performance, a new Credal Belief Redistribution (CBR) method is proposed to revise such evidence. The rationale of CBR consists in transferring belief from one class not just to other classes but also to the associated disjunctions of classes (i.e., meta-classes). As classification accuracy for different objects in a given classifier can also vary, the evidence is revised according to prior knowledge mined from its training neighbors. If the selected neighbors are relatively close to the evidence, a large amount of belief will be discounted for redistribution. Otherwise, only a small fraction of belief will enter the redistribution procedure. An imprecision matrix estimated based on these neighbors is employed to specifically redistribute the discounted beliefs. This matrix expresses the likelihood of misclassification (i.e., the probability of a test pattern belonging to a class different from the one assigned to it by the classifier). In CBR, the discounted beliefs are divided into two parts. One part is transferred between singleton classes, whereas the other is cautiously committed to the associated meta-classes. By doing this, one can efficiently reduce the chance of misclassification by modeling partial imprecision. The multiple revised pieces of evidence are finally combined by Dempster-Shafer rule to reduce uncertainty and further improve classification accuracy. 
The effectiveness of CBR is extensively validated on several real datasets from the UCI repository, and critically compared with that of other related fusion methods.
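For readers unfamiliar with the Dempster-Shafer rule used in the final fusion step above, a minimal sketch of combining two mass functions might look as follows (the class labels and mass assignments are purely illustrative, not taken from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of class
    labels to masses) with Dempster's rule: conjunctive combination
    followed by normalisation of the conflicting (empty-intersection)
    mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two classifiers expressing uncertain support over classes {A, B};
# mass on the whole set {A, B} models "don't know" (imprecision).
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("B"): 0.3, frozenset("AB"): 0.7}
fused = dempster_combine(m1, m2)
```

Note how mass assigned to the meta-class {A, B} lets a classifier abstain from committing to a single class, which is exactly the kind of partial imprecision CBR exploits.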

Paper preprint PDF

March 13, 2019

UKIERI project funded

The Visual AI Laboratory has secured, in partnership with the Indian Institute of Technology (IIT) Bombay, funding from UKIERI (the UK-India Education and Research Initiative) for a project on "Analysis of Human Action in Unconstrained Videos". IIT Bombay Director Subhasis Chaudhuri will lead the Indian side of the effort.

Human action detection and recognition from videos are two of the most challenging tasks in computer vision. These problems become even more severe when dealing with fine-grained action categories. An exploration of the evolution of salient body parts (local motion) is needed in this respect to better discriminate such similar-looking human activities. Dominant action detection paradigms work by locating actions of interest on a frame-by-frame basis, and linking them up in time to form 'action tubes'. Moreover, given the vast range of possible actions, it is very hard to annotate labelled training videos in a cost-effective manner. The notion of 'zero-shot' classification can be adopted in such situations for the categorisation of previously unexplored human activities. In this perspective, we propose in this project to explore the notion of mid-level feature mining from video data.

March 2019

Two new members of staff

Two new members of staff have joined the Laboratory.

Dr Reza Javanmard Alitappeh is the new Fellow in AI for Autonomous Driving funded by the School of Engineering, Computing and Mathematics.
Before joining the Lab, Reza was Assistant Professor at the University of Science and Technology of Mazandaran, Iran.
He has been appointed for two years to work on our proposal for decision making in autonomous driving based on endowing machines with theory of mind capabilities, and the validation of these notions in a simulated environment, in collaboration with Andrew Bradley's Autonomous Driving research group.
He will also take charge of the general effort in the area of autonomous driving in the School, and advise the work of the newly created Autonomous Driving Student Society.

Wojtek Buczynski is a PhD student based at Cambridge University, under the supervision of Professor Barbara Sahakian. Professor Cuzzolin has been invited to act as second supervisor on AI aspects. The topic of Wojtek's PhD will centre around the applicability of AI to portfolio allocation in the financial industry.
Wojtek is currently Senior Manager at Fidelity International. He completed his Master’s in Finance at the London Business School in 2011. He obtained his FRM designation in 2014 and his CFA designation in 2015. He is interested in artificial intelligence (AI), cutting-edge technology, FinTech and financial innovation, and behavioural finance.

February 15, 2019

Promotion to Professor level 2

On February 15, 2019, the university’s Senior Academic Promotions Committee considered and approved Prof Cuzzolin's application for promotion to Professor Level 2. The contract amendment was backdated to 1 September 2018.

Fabio would like to thank all external referees who were so kind as to support his application!

January 2019

Internship at Borealis AI, Vancouver

PhD student Gurkirt Singh has started a three-month internship in Vancouver at Borealis AI, a startup funded by Royal Bank of Canada, under the supervision of Professor Greg Mori. He will be working on graph neural networks for human-object interaction.

Borealis AI supports RBC’s innovation strategy through fundamental scientific study and exploration in machine learning theory and applications. The team aims to develop state-of-the-art machine learning and supports academic collaborations with world-class research centres in artificial intelligence.

January 2019

Invited talks at ICRA 2019 and the Hamlyn Symposium

Professor Cuzzolin has been invited to speak at the upcoming ICRA 2019 (the International Conference on Robotics and Automation) Workshop: Next Generation Surgery: Seamless integration of Robotics, Machine Learning and Knowledge Representation within the operating rooms.

The use of surgical robots has, beyond doubt, led to advances and improvements in surgery. The next significant leap forward is expected with the introduction of intelligent systems that can operate autonomously, or semi-autonomously in cooperation with surgeons. In this quest for intelligence, growing synergies from diverse scientific branches have emerged. These include the areas of machine learning, knowledge representation and perceptual interfaces, as well as new robotic concepts and methodologies able to accommodate this ever-increasing body of scientific research. Outside academic research settings, evidence of this exponential growth can also be witnessed in the significant investment committed by commercial surgical robot developers and manufacturers. New high-tech companies and start-ups are also emerging at an increasing rate. The aim of this workshop is to explore the next generation of robotic surgery from different and diverse angles. One aspect concentrates on the most innovative technologies and advances in the fields of robotics, machine learning, artificial intelligence and knowledge representation. A second aspect focuses on international scientific projects presented as motivating case studies. Importantly, the industrial point of view is accommodated in a “reality testing” role, regarding the current level of adoption of scientific research in the field and its future potential.

Professor Cuzzolin has also been invited to speak at the Hamlyn Symposium Workshop: “Towards robotic autonomy in surgery”, London, June 23 2019.

Dexterity and perception capabilities of surgical robots may soon be enhanced by cognitive functions that can support surgeons in decision making and performance monitoring, and enhance surgical quality.
However, the basic elements of autonomy are not well understood and their mutual interaction is unexplored. The current classification of autonomy encompasses six basic levels: Level 0: no autonomy; Level 1: robot assistance; Level 2: task autonomy; Level 3: conditional autonomy; Level 4: high autonomy; Level 5: full autonomy.

October 2018

Paper at ACCV 2018

A paper was accepted for publication at ACCV 2018 (the Asian Conference on Computer Vision), entitled "TraMNet - Transition Matrix Network for Efficient Action Tube Proposals" by Gurkirt Singh, Suman Saha, and Fabio Cuzzolin.

Current state-of-the-art methods solve spatio-temporal action localisation by extending 2D anchors to 3D-cuboid proposals on stacks of frames, to generate sets of temporally connected bounding boxes called action micro-tubes. However, they fail to consider that the underlying anchor proposal hypotheses should also move (transition) from frame to frame, as the actor or the camera do. Assuming we evaluate n 2D anchors in each frame, then the number of possible transitions from each 2D anchor to the next, for a sequence of f consecutive frames, is in the order of O(n^f), expensive even for small values of f. To avoid this problem we introduce a Transition-Matrix-based Network (TraMNet) which relies on computing transition probabilities between anchor proposals while maximising their overlap with ground truth bounding boxes across frames, and enforcing sparsity via a transition threshold. As the resulting transition matrix is sparse and stochastic, this reduces the proposal hypothesis search space from O(n^f) to the cardinality of the thresholded matrix. At training time, transitions are specific to cell locations of the feature maps, so that a sparse (efficient) transition matrix is used to train the network. At test time, a denser transition matrix can be obtained either by decreasing the threshold or by adding to it all the relative transitions originating from any cell location, allowing the network to handle transitions in the test data that might not have been present in the training data, and making detection translation-invariant. Finally, we show that our network is able to handle sparse annotations such as those available in the DALY dataset, while allowing for both dense (accurate) and sparse (efficient) evaluation within a single model. We report extensive experiments on the DALY, UCF101-24 and Transformed-UCF101-24 datasets to support our claims.
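As an illustration of the sparsification idea (a simplified sketch, not the authors' actual implementation), a transition matrix could be estimated and thresholded along these lines, assuming the best-matching anchor indices for consecutive frames have already been extracted from the training data:

```python
import numpy as np

def transition_matrix(anchor_pairs, n_anchors, threshold=0.05):
    """Estimate a sparse, row-stochastic anchor-transition matrix.

    anchor_pairs: list of (i, j) pairs where anchor i best matched the
    ground-truth box at frame t and anchor j matched it at frame t+1.
    Entries below `threshold` are zeroed out and each row re-normalised,
    so only likely transitions survive as proposal hypotheses.
    """
    T = np.zeros((n_anchors, n_anchors))
    for i, j in anchor_pairs:
        T[i, j] += 1.0                       # count observed transitions
    row_sums = T.sum(axis=1, keepdims=True)
    T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
    T[T < threshold] = 0.0                   # enforce sparsity
    row_sums = T.sum(axis=1, keepdims=True)  # re-normalise surviving rows
    T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
    return T
```

Raising the threshold shrinks the hypothesis search space at the cost of missing rarer transitions, which mirrors the dense-vs-sparse evaluation trade-off described in the abstract.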

The PDF version of the paper can be found here.

September 2018

New edited volume with Springer

This book constitutes the refereed proceedings of the 5th International Conference on Belief Functions, BELIEF 2018, held in Compiègne, France, in September 2018. The 33 revised regular papers presented in this book were carefully reviewed and selected from 73 submissions. Papers were solicited on theoretical aspects (including for example statistical inference, mathematical foundations, continuous belief functions) as well as on applications in various areas including classification, statistics, data fusion, network analysis and intelligent vehicles.

September 2018

Associate Editorship of the International Journal of Approximate Reasoning

Professor Cuzzolin has accepted an Associate Editor position with the International Journal of Approximate Reasoning.

The International Journal of Approximate Reasoning is intended to serve as a forum for the treatment of imprecision and uncertainty in Artificial and Computational Intelligence, covering both the foundations of uncertainty theories, and the design of intelligent systems for scientific and engineering applications. It publishes high-quality research papers describing theoretical developments or innovative applications, as well as review articles on topics of general interest.
Relevant topics include, but are not limited to, probabilistic reasoning and Bayesian networks, imprecise probabilities, random sets, belief functions (Dempster-Shafer theory), possibility theory, fuzzy sets, rough sets, decision theory, non-additive measures and integrals, qualitative reasoning about uncertainty, comparative probability orderings, game-theoretic probability, default reasoning, nonstandard logics, argumentation systems, inconsistency tolerant reasoning, elicitation techniques, philosophical foundations and psychological models of uncertain reasoning.

The journal is affiliated with the Society for Imprecise Probability: Theories and Applications (SIPTA), and Belief Functions and Applications Society (BFAS).
The Editor-in-Chief is Professor Thierry Denoeux. The 2017 impact factor of IJAR is 1.766.

September 2018

Board position for a new Huawei-SFU research centre

Professor Cuzzolin has started a new position as Executive Committee member for the new Huawei - Simon Fraser University research centre in Vancouver, Canada.

August 2018

New Research Fellow in Artificial Intelligence for Autonomous Driving

The Visual AI Laboratory, in partnership with Dr Matthias Rolf of the Cognitive Robotics group and the Autonomous Driving group led by Dr Andrew Bradley, has secured funding of £100,000 from the School of Engineering, Computing and Mathematics to support a Research Fellow in Artificial Intelligence for Autonomous Driving, for a period of two years.

The project concerns the design and development of novel ways for robots and autonomous machines to interact with humans in a variety of emerging scenarios, including: human-robot interaction, autonomous driving, personal (virtual or robotic) assistants. In particular, we believe novel, disruptive applications of AI require much more sophisticated forms of communication between humans and machines, something that goes far beyond conventional explicit and linguistic exchange of information towards implicit non-verbal communication and understanding of each other's behaviour.
For example, smart cars need to understand that children and construction workers have different reasoning processes that lead to very different observable behaviour, in order to blend in with the road as a human-centered environment. Empathic machines have the potential to revolutionise healthcare, by providing better care catering for the psychological needs of patients. Morally and socially appropriate behaviour is key in all such scenarios, to build trust and lead to acceptance from the public.
Exciting research is currently going on in moral robotics and AI, including moral development (how a robot can learn moral principles), fairness and bias in, for instance, AI-assisted recruitment. As smart cars head towards real world deployment, the field is shifting from mere perception (e.g. SLAM) to higher-level cognition tasks, starting from the automated detection of road events. Holographic AI is going to revolutionise the field of personal assistants, but needs effective communication interfaces.

Cuzzolin is exploring the design and implementation of a machine theory of mind model based on a simulation approach, in which input stimuli drive an agent-specific simulation of their mental states. Simulations are implemented as reconfigurable deep neural networks, learned by reinforcement learning. Closely related to this, Rolf is investigating socially-originated rewards for reinforcement learning, including pre-linguistic cues such as face detection, synchrony and contingency, as well as investigating robotic moral issues. Both research directions are directly applicable to autonomous driving – the Visual AI Lab is currently providing road event and agent activity annotation for the Oxford RobotCar dataset which is bound to have a significant impact on the field, as the first such benchmark. The benchmark will be released in October 2018. In the first year of the project, the Fellow would implement reinforcement learning based machine theory of mind models and test them on the new data to provide a proof of concept. Bradley has been working in the area of vehicle simulation for many years, and also upon driver behaviour analysis using a driving simulator (with Prof Helen Dawes). Bradley is currently working with Dr Peter Ball on areas of modelling autonomous vehicle behaviour, resulting in a recent Innovate UK application for Connected and Autonomous Vehicle (CAV) simulation.

A Research Fellow Grade 8 position (starting salary: £30,688) will be advertised as early as September 2018.

August 2018

New Knowledge Transfer Partnership with Createc and Sportslate

A Knowledge Transfer Partnership (KTP) with Createc and Sportslate, two successful spinoffs of Oxford University, was funded at the latest round by Innovate UK.

The project is split into two key phases each taking approximately 12 months, aiming to demonstrate a simple proof of concept at the mid-point with the second year focused on maturation, refinement and steps to commercialisation. The first phase will consist of the Associate reviewing the state of the art and conducting a literature review, understanding the hardware and system architecture and capturing further datasets for algorithmic training, in addition to the following technical work packages:
  1. Sensor fusion: The company's system provides not only video imagery from multiple viewpoints but also data providing depth, dynamic data and point cloud overlays over the imagery. This enables a novel approach to action identification where this extra information can be integrated with the video to enhance performance
  2. Person segmentation: The first task ahead of person or action identification is to segment the person from the background which, due to the tracking system, is highly dynamic. This is a key enabling task, but there are multiple existing techniques for performing it
  3. Person identification: It is important for all applications to associate an action with an individual. In the crowd monitoring case, single actions may be inconsequential but an individual carrying out multiple actions may be of more interest
  4. Single person action identification: This task will develop algorithms for identifying single person actions from the video data
These will be integrated for a proof of concept demonstration in month 13. The second phase of the work will integrate the algorithms with real customer datasets and other datasets held by Createc, enabling testing of the algorithms under a wide range of conditions. Inevitably this will lead to algorithm refinement. This work is important to demonstrate that the approaches can be used commercially with real data, therefore de-risking commercial exploitation beyond this project. Technically this phase will also include extension of the single person action identification to multi-people events, and for the system to understand these links.
Towards the end of the project, the algorithms and capabilities will be marketed to prospective customers, and the Associate will work on development of marketing material, videos and academic papers/presentations to raise the profile of the work.

A KTP Associate position will be advertised as early as September 2018. Salary will be in the range of £30,000 - 35,000 per annum.

July 30 2018

Paper at ECCV 2018 Workshop on Anticipating Human Behaviour

A paper was accepted for publication at the ECCV 2018 (the European Computer Vision Conference) AHB Workshop, entitled "Predicting Action Tubes" by Gurkirt Singh, Suman Saha, and Fabio Cuzzolin.

The purpose of this workshop is to discuss recent approaches that anticipate human behavior from video or other sensor data, to bring together researchers from multiple fields and perspectives, and to discuss major research problems and opportunities and how we should coordinate efforts to advance the field.

In this work, we present a method to predict an entire ‘action tube’ (a set of temporally linked bounding boxes) in a trimmed video just by observing a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision based applications such as autonomous driving or surgical robotics. Importantly, it has to be done in real time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time TPnet is used in a (temporal) sliding window setting, and its predictions are put into a tube estimation framework to construct/predict the video-long action tubes not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks, the J-HMDB-21 dataset.
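To give a feel for the prediction task, here is a deliberately simple stand-in for the learned regression a network such as TPnet would perform: linear extrapolation of a tube's future boxes from the frames observed so far. This is purely illustrative and is not the paper's method.

```python
import numpy as np

def predict_future_boxes(observed, n_future):
    """Given the boxes observed so far for one tube (a T x 4 array of
    (x1, y1, x2, y2) rows, one per frame), linearly extrapolate the
    tube n_future frames ahead by fitting each coordinate over time."""
    observed = np.asarray(observed, dtype=float)
    t = np.arange(len(observed))
    future_t = np.arange(len(observed), len(observed) + n_future)
    preds = []
    for k in range(4):  # fit each box coordinate independently
        slope, intercept = np.polyfit(t, observed[:, k], 1)
        preds.append(slope * future_t + intercept)
    return np.stack(preds, axis=1)  # n_future x 4 array of predicted boxes

# A box drifting one pixel right per frame keeps drifting in the forecast
future = predict_future_boxes([[0, 0, 10, 10],
                               [1, 0, 11, 10],
                               [2, 0, 12, 10]], n_future=2)
```

A learned predictor replaces the straight-line assumption with motion patterns mined from training tubes, which is what allows it to also score the action class of the unobserved segment.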

The PDF version of the paper can be found here.

Summer 2018

The 3,600 mile experiment: Parkinson's disease on the ocean

The Visual AI Lab is a partner in the ongoing Parkinson's row in the Indian Ocean.

A crew are rowing across the Indian Ocean to shake up our understanding of Parkinson's disease—and break a world record while they're at it. For people with Parkinson's disease, exercise is prescribed to treat the symptoms most commonly associated with the condition. The muscle tremors, cramps and gait issues that characterise the disease appear to be mitigated with physical activity. Anecdotally, we know that endurance activities appear to be more beneficial for these physical symptoms, lessening the need for medication. But that's about as far as our understanding goes of the relationship between physical activity and Parkinson's disease. For instance, exercise doesn't seem to ward off the other, less visible symptoms of the disease in the same way, and we don't know why. Fatigue, one of Parkinson's most disabling symptoms, appears to persist in sufferers even if they exercise. Why does one set of symptoms improve but not the other? Is endurance exercise key, in that more is always better? Does endurance exercise affect Parkinson's sufferers differently to healthy people? What better way to answer these questions than to row a boat for 65 days straight, all the way from Western Australia to Mauritius?

Robin Buttery, Barry Hayes, James Plumley and skipper Billy Taylor are planning on rowing across the Indian Ocean. Robin was diagnosed with young onset Parkinson's disease 2 years ago, just before his 44th birthday. Determined to show that life doesn't stop with his diagnosis, he's taken on the formidable challenge of rowing 2 hours on, 2 hours off for 12 weeks straight. Whilst it's marketed as an attempt to beat the world record, the row will hopefully serve another purpose. Behind the scenes of this international expedition are Professor Helen Dawes, Professor Fabio Cuzzolin and Dr. Johnny Collett of Oxford Brookes University in the UK. For them, the row is a scientific experiment, and the crew are their lab rats.

The event was recently covered in the following media pieces: “The 3,600 mile experiment: Parkinson's disease on the ocean” – MedicalXpress, June 25 2018; “Row for Parkinson’s” – The West Australian, 7 July 2018; “British Crew Rowing the Distance to Improve Understanding of Parkinson’s Disease”, June 27 2018.

May 2018

New paper at BMVC 2018

A paper was accepted for publication at BMVC 2018, the British Machine Vision Conference, entitled "Incremental Tube Construction for Human Action Detection" by Harkirat Behl, Michael Sapienza, Gurkirt Singh, Suman Saha, Fabio Cuzzolin and Philip H. S. Torr, a joint work with Oxford University's Torr Vision Group.

The British Machine Vision Conference (BMVC) is the British Machine Vision Association (BMVA) annual conference on machine vision, image processing, and pattern recognition. It is one of the major international conferences on computer vision and related areas held in the UK. Owing to its increasing popularity and quality, it has established itself as a prestigious event on the vision calendar.

Current state-of-the-art action detection systems are tailored for offline batch-processing applications. However, for online applications like human-robot interaction, current systems fall short. In this work, we introduce a real-time and online joint-labelling and association algorithm for action detection that can incrementally construct space-time action tubes on the most challenging untrimmed action videos in which different action categories occur concurrently. In contrast to previous methods, we solve the linking, action labelling and temporal localization problems jointly in a single pass. Our online algorithm outperforms the current state-of-the-art offline and online systems in terms of accuracy with a margin of 16% in mAP, and in terms of speed (1.8ms per frame). We further demonstrate that the entire action detection pipeline can easily be made to work effectively in real-time using our action tube construction algorithm.
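As a rough illustration of online, single-pass tube construction (a simplified greedy sketch, not the joint-labelling algorithm from the paper), each new frame's detections could be associated to live tubes like this:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_frame(tubes, detections, iou_thresh=0.3):
    """One online step: greedily extend each live tube with the
    best-overlapping unassigned detection (box, score) from the new
    frame; leftover detections start new tubes."""
    unassigned = list(detections)
    for tube in tubes:
        last_box = tube[-1][0]
        best, best_iou = None, iou_thresh
        for det in unassigned:
            o = iou(last_box, det[0])
            if o > best_iou:
                best, best_iou = det, o
        if best is not None:
            tube.append(best)
            unassigned.remove(best)
    tubes.extend([det] for det in unassigned)
    return tubes
```

Because each frame is processed once and tubes are never revisited, the cost per frame stays constant, which is the property that makes the real-time, online setting of the paper feasible.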

The PDF version of the paper can be found here.

July 10 2018

Invited talk at COSUR 2018

Fabio was invited to speak at the upcoming COSUR 2018 Summer School on Surgical Robotics.

The main objective of COSUR 2018 is to introduce PhD students and Post-Doctoral fellows to the multidisciplinary research field of surgical robotics, with a particular focus on the control algorithms used in robotic surgery and the impact of cognition in directing the control. We will offer lectures, hands-on laboratory experience, and opportunities for informal interaction with clinicians and leading experts from academia and industry. The school will go beyond the current approach of doctoral schools and will give trainees an in-depth understanding of cognition and control in robotic surgery.

June 2018

Brookes rises by 9 places in the Guardian university guide 2019

May 2018

Two papers accepted at BELIEF 2018

Two papers were accepted for publication at the joint SMPS-BELIEF 2018 International Conference, entitled "General geometry of belief function combination" and "Generalised max entropy classifiers".

The BELIEF and SMPS conferences are biennial events concerning the modeling of uncertainty. The BELIEF conferences are sponsored by the Belief Functions and Applications Society (BFAS) and are focused on the theory of belief functions, while the scope of SMPS covers the application of all approaches to uncertainty (including fuzzy and rough sets, imprecise probabilities, etc.) to statistics and data analysis. The co-location of the two events is intended to favor cross-fertilization among researchers active in both communities.

General geometry of belief function combination: PDF version.
Generalised max entropy classifiers: PDF version.

June 1 2018

Invited tutorial at Seoul National University

Prof Cuzzolin was invited to give a tutorial on "Belief functions: A gentle introduction" at the Department of Statistics of Seoul National University, the top Korean university.

The event was organised by Associate Teaching Professor Hyeyoung Jung. Tutorial slides are available here.

May 2018

Three new visitors joining the Lab!

Three new visitors have joined the Laboratory in May.
Valentina Fontana is an MSc student from the University Federico II in Naples, visiting as part of an Erasmus+ exchange programme with the local IDEAinVR lab led by Prof Giuseppe Di Gironimo. Valentina will stay with us until September, working on a dissertation on recognising complex road events for autonomous driving.

Silvio Olivastri is a Visiting Researcher from AI Labs, Bologna, Italy. AI Labs is seeking a longer term partnership with the Visual AI Lab. Silvio will work on the deep video captioning project started by former postdoc Ruomei Yan.

Santanu Rathod is a second-year student from IIT Bombay, here as part of an exchange programme between Brookes' ECM School and IIT. Santanu will work on deep learning for predicting future actions, over a period of three months.

May 5-9, 2018

The Fifth Bayesian, Fiducial and Frequentist conference (BFF 5)

Fabio has been invited to speak at the latest edition of the Bayesian, fiducial and frequentist (BFF) series of statistical conferences.

The BFF series began in 2014 with the goals of facilitating the exchange of research developments in Bayesian, fiducial and frequentist (BFF) methodology, bridging gaps among the different statistical paradigms, stimulating collaborations, and fostering opportunities for the involvement of new researchers. Over the last four years, these meetings have served as a platform for comparing and connecting methods and theory from the differing, yet related, BFF perspectives.

The 2018 BFF5 will focus on the theme of “Foundations of Data Science”. Invited talks (30 minutes in length) are encouraged to align with this theme, re-examining the role of, and reporting new advances on, the foundations of statistical inference in this new era of data science. This year, short courses on fiducial statistics and confidence distributions are also offered on Sunday, May 6, followed by the main conference on May 7-9. The short courses will prepare conference attendees to better participate in the scientific programme of the main conference. A conference banquet is planned for the evening of Monday, May 7; Dr. Glenn Shafer will be the banquet speaker.

Conference announcement

January 24 2018:

Towards machines that can read your mind, Professorial lecture, Brookes Open Lecture Series.

Professor Fabio Cuzzolin explores how intelligent machines can negotiate a complex world fraught with uncertainty, and how to enable machines to deal with situations they have never encountered in the safest possible way. Interacting naturally with human beings and their complex environments will only be possible if machines are able to put themselves in people’s shoes: to guess their goals, beliefs and intentions – in other words, to read our minds.
Fabio explains just how machines can be provided with this mind-reading ability.

Watch it on Facebook here:

Watch it with slides on the Brookes Open Lecture series web site:

PDF slides are available here.

August 8 2017:

Fabio was awarded the Horizon 2020 project "SARAS - Smart Autonomous Robotic Assistant Surgeon", on the development of robotic assistant surgeons for laparoscopy.

The team will be in charge of the vision and cognitive modules of the system. The project has a total budget of €4,315,640, of which Oxford Brookes' share is €596,073. The project's duration is 3 years, with an agreed start date of March 1st 2018. The Coordinator is Dr Riccardo Muradore from the University of Verona, Italy. Fabio's role will be Scientific Officer (SO) for the whole project, as well as WP Leader.

List of Horizon 2020 projects funded in 2017

In surgical operations many people crowd the area around the operating table. The introduction of robotics in surgery has not decreased this number. During a laparoscopic intervention with the da Vinci robot, for example, the presence of an assistant surgeon, two nurses and an anaesthetist is required, together with that of the main surgeon teleoperating the robot. The assistant surgeon must always be present to take care of the simple surgical tasks the main surgeon cannot perform with the robotic tools s/he is teleoperating (e.g. suction and aspiration during dissection, or moving or holding organs in place to make room for cutting or suturing, using the standard laparoscopic tools). Another expert surgeon is thus required to play the role of the assistant, properly supporting the main surgeon using traditional laparoscopic tools, as shown in Figure 1.

The goal of SARAS is to develop a next-generation surgical robotic platform that allows a single surgeon (i.e., without the need for an expert assistant surgeon) to execute robotic minimally invasive surgery (R-MIS), thereby increasing the social and economic efficiency of a hospital while guaranteeing the same level of safety for patients. This platform is called the 'solo-surgeon' system.

July 24 2017:

The Artificial Intelligence and Vision team, led by PhD student Gurkirt Singh, in partnership with Andreas Lehrmann and Leonid Sigal of Disney Research, has won second place in the latest CVPR2017 Charades Activity Challenge for action recognition, behind DeepMind's TeamKinetics led by Andrew Zisserman, and third place for temporal detection. Leaderboard

The Charades Activity Challenge aims towards the automatic understanding of daily activities, by providing realistic videos of people doing everyday activities. The Charades dataset offers a unique insight into daily tasks such as drinking coffee, putting on shoes while sitting in a chair, or snuggling with a blanket on the couch while watching something on a laptop. This enables computer vision algorithms to learn from real and diverse examples of our daily dynamic scenarios. The challenge consists of two separate tracks: a classification track and a localization track. The classification track requires recognising all activity categories for a given video ('Activity Classification'), where multiple overlapping activities can occur in each video. The localization track requires finding the temporal locations of all activities in a video ('Activity Localization').

Description of the method

At a high level, our approach consists of two parallel convolutional neural networks (CNNs) extracting static (i.e., frame-independent) appearance and optical-flow features for each frame, plus a third parallel audio stream that extracts features using the SoundNet CNN and scores them with an SVM. We fuse the information from the three streams using a convex combination of their respective classification scores to obtain the final result.
We train the overall network using a multi-task loss: (1) Classification: both streams produce a C-dimensional softmax score vector, trained via back-propagation with a cross-entropy loss; (2) Regression: in addition to the classification scores, the appearance stream also produces 3-dimensional coefficients for each class, describing the offsets from the boundaries of the current action as well as its overall duration. This network path is trained using a smooth L1 loss.
The audio stream consists of feature extraction using the pretrained SoundNet CNN and an SVM classifier, producing classification scores in a sliding-window fashion. Audio scores are interpolated to the same frame rate as the outputs of the other two streams.
We generate frame-level scores at 12 fps. For temporal action segmentation, we fuse the scores of the three streams at the frame level using a convex combination; the weights for each stream are found by cross-validation on the validation set. Finally, we produce a score vector for 25 regularly sampled frames using top-k mean pooling in a temporal window around those frames: the frame-level score for each class c is the mean of the top-20 frame-level scores for c in a temporal window of size 40. Similarly, we apply top-k mean pooling to the scores for class c over the entire duration of the video to obtain video-level classification scores; we found via cross-validation that a top-k value of 40 works well.
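The two score-pooling operations just described (a convex combination of the three streams' frame-level scores, followed by top-k mean pooling over a temporal window) can be sketched as follows. This is an illustrative reconstruction, not the team's actual code: the function names and example weights are invented; only the reported k values (20 or 40) come from the description above.

```python
import numpy as np

def fuse_streams(appearance, flow, audio, w=(0.5, 0.3, 0.2)):
    """Convex combination of three (T, C) frame-level score arrays.

    The weights w are illustrative; in the method they are found by
    cross-validation on the validation set.
    """
    assert abs(sum(w) - 1.0) < 1e-9  # convexity: weights sum to 1
    return w[0] * appearance + w[1] * flow + w[2] * audio

def topk_mean_pool(scores, k):
    """Mean of the k highest frame-level scores, per class.

    scores: (T, C) array of frame-level scores; returns a (C,) vector.
    """
    k = min(k, scores.shape[0])
    top = np.sort(scores, axis=0)[-k:]  # k largest values per class
    return top.mean(axis=0)
```

A convex combination (non-negative weights summing to one) guarantees the fused scores stay within the range spanned by the individual streams' scores.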

July 16 2017:

The papers

G. Singh, S. Saha, M. Sapienza, P. Torr and F. Cuzzolin, Online Real-time Multiple Spatiotemporal Action Localisation and Prediction

Link to arXiv version

S. Saha, G. Singh and F. Cuzzolin, AMTnet: Action-Micro-Tube regression by end-to-end trainable deep architecture

Link to arXiv version

were accepted for publication at the International Conference on Computer Vision (ICCV 2017), Venice, Italy, October 2017 - the premier venue for computer vision - as part of the ongoing world-leading action detection project at the Artificial Intelligence and Vision group.

July 6 2017:

Fabio was invited to speak at the Fourth Summer School on Belief Functions and Their Applications (BELIEF 2017)

Title of the talk: The statistics of belief functions

Although born within the remit of mathematical statistics, the theory of belief functions has later evolved towards subjective interpretations, which have distanced it from its mother field and drawn it nearer to artificial intelligence. The purpose of the first part of this talk is to understand belief theory in the context of mathematical probability and its main interpretations, Bayesian and frequentist statistics, contrasting these three methodologies according to their treatment of uncertain data.
In the second part we recall the existing statistical views of belief function theory, due to the work by Dempster, Almond, Hummel and Landy, Zhang and Liu, Walley and Fine, among others.
Finally, we outline a research programme for the development of a fully-fledged theory of statistical inference with random sets. In particular, we discuss the notion of generalised lower and upper likelihoods, the formulation of a framework for logistic regression with belief functions, the generalisation of the classical total probability theorem to belief functions, the formulation of parametric models based on random sets, and the development of a theory of random variables and processes in which the underlying probability space is replaced by a random set space.

June 2017:

Fabio is elected Executive Editor of the Society for Imprecise Probability: Theories and Applications (SIPTA).

The Society for Imprecise Probability: Theories and Applications (SIPTA) was created in February 2002, with the aim of promoting the research on imprecise probability. This is done through a series of activities for bringing together researchers from different groups, creating resources for information, dissemination and documentation, and making other people aware of the potential of imprecise probability models.
The Society has its roots in the Imprecise Probabilities Project conceived in 1996 by Peter Walley and Gert de Cooman and its creation has been encouraged by the success of the ISIPTA conferences.
Imprecise probability is understood in a very wide sense. It is used as a generic term to cover all mathematical models which measure chance or uncertainty without sharp numerical probabilities. It includes both qualitative models (comparative probability, partial preference orderings, …) and quantitative models (interval probabilities, belief functions, upper and lower previsions, …). Imprecise probability models are needed in inference problems where the relevant information is scarce, vague or conflicting, and in decision problems where preferences may also be incomplete.

June 13 2017:

The paper The Total Belief Theorem, authored by Dr Chunlai Zhou and Professor Fabio Cuzzolin, is accepted for publication at Uncertainty in Artificial Intelligence (UAI) 2017

In this paper, motivated by the treatment of conditional constraints in the data association problem, we state and prove the generalisation of the law of total probability to belief functions, as finite random sets.
Our results apply to the case in which Dempster's conditioning is employed. We show that the solution to the resulting total belief problem is in general not unique, whereas it is unique when the a-priori belief function is Bayesian. Examples and case studies underpin the theoretical contributions.
Finally, our results are compared to previous related work on the generalisation of Jeffrey’s rule by Spies and Smets.
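For reference, the classical result being generalised is the law of total probability; in the paper's setting, the prior and the conditionals are belief functions rather than probabilities, and conditioning is Dempster's. Schematically:

```latex
% Classical law of total probability, for a partition {B_1, ..., B_n} of Omega:
P(A) \;=\; \sum_{i=1}^{n} P(A \mid B_i)\, P(B_i).
% The total belief problem asks for a belief function on Omega which is
% consistent, under Dempster's conditioning, with a given a-priori belief
% function on the partition and given conditional belief functions on each B_i.
```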

Paper submission PDF

October 2016:

Podcast with Risk Roundup: "Advances in AI: Human/Non-Human Action and Gesture Recognition". Prof. Fabio Cuzzolin, Head of Artificial Intelligence and Vision at Oxford Brookes University, Oxford, United Kingdom, participates in Risk Roundup to discuss ''Advances in Artificial Intelligence: Human and Non-Human Gesture and Action Recognition''.

How would we define and describe a man-machine or machine-machine interface, and why is it relevant to understanding Artificial Intelligence? A man-machine or machine-machine interface is a mediator between (human or non-human) users and machines: basically, a system that takes care of the entire communication process. It is responsible for delivering the machine's or computer's knowledge, functionality and available information in a way that is compatible with the end user's communication channels, be they human or non-human. It then translates the user's actions (user input) into a form (instructions/commands) that is understandable by the machine.

As increasingly complex Artificial Intelligence-based systems, products and services rapidly emerge across nations, more user-friendly man-machine and machine-machine interfaces are becoming increasingly necessary for their effective utilisation, and consequently for the success they were designed for.

Published on Risk Group:

October 2016:

Fabio has been invited to be a keynote speaker at CSA 2016, the 2nd Conference on Computing Systems and Applications. The second edition of the Computing Systems and Applications (CSA) conference will take place from December 13 through December 14, 2016. The conference is open to researchers, academics and industry practitioners interested in the latest scientific and technological advances occurring in different fields of computer science. It constitutes a leading venue for students, researchers, academics and industrials to share their new ideas, original research findings and practical experiences across all computer science disciplines.

CSA 2016 will be held in the Ecole Militaire Polytechnique (EMP) located in Algiers; the capital and the largest city of Algeria. This pioneering engineering college is situated in Bordj El Bahri, a lively city lapped by the Mediterranean Sea and facing the well-known Algiers bay. EMP is one of the oldest technical schools for the training of highly-qualified academics in Algeria. Its know-how covers teaching and research activities in the fields of computer science, electrical and mechanical engineering, and chemistry.

Download the Call for Papers at

July 2016:

Fabio is promoted to Professor

July 14 2016:

Invited seminar "Belief functions: past, present and future", part of the statistics colloquia at Harvard University's Department of Statistics.

The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference, to be later developed by Glenn Shafer as a general framework for modelling epistemic uncertainty. Belief theory and the closely related random set theory form a natural framework for modelling situations in which data are missing or scarce: think of extremely rare events such as volcanic eruptions or power plant meltdowns, problems subject to huge uncertainties due to the number and complexity of the factors involved (e.g. climate change), but also the all-important issue with generalisation from small training sets in machine learning.

This short talk, abstracted from an upcoming half-day tutorial at IJCAI 2016, is designed to introduce non-experts to the principles and rationale of random sets and belief function theory; review its rationale in the context of the frequentist and Bayesian interpretations of probability, as well as in relationship with the other main approaches to non-additive probability; survey the key elements of the methodology and the most recent developments; and discuss current trends in both its theory and applications. Finally, a research programme for the future is outlined, which includes a robustification of Vapnik's statistical learning theory for an Artificial Intelligence 'in the wild'.

Slides in PDF format

July 13 2016:

The paper Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos, led by first author Suman Saha, was accepted for publication at BMVC 2016 Project web site

In this work we propose a new approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages.
In stage 1, cascades of deep region proposal and detection networks are employed to classify regions of each video frame potentially containing an action of interest. In stage 2, appearance and motion cues are combined by merging the detection boxes and softmax classification scores generated by the two cascades. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called 'action tubes', are constructed by solving two optimisation problems via dynamic programming.
While in the first pass action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap, in the second pass temporal trimming is performed by ensuring label consistency for all constituting detection boxes.
We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly lower detection latency at test time.
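The first dynamic-programming pass (linking detection boxes over time using class-specific scores and spatial overlap) follows a classic Viterbi-style recursion, which can be sketched as below. This is a hypothetical illustration of the idea, with an invented scoring trade-off parameter, not the paper's exact formulation.

```python
# Illustrative Viterbi-style linking of per-frame detections into an
# action path maximising (class score + lam * spatial overlap).
# NOT the paper's exact optimisation; lam is an invented parameter.

def best_action_path(frames, overlap, lam=1.0):
    """frames: list over time of lists of (box, class_score).

    overlap(box_a, box_b) -> spatial overlap in [0, 1].
    Returns the index of the chosen detection in each frame.
    """
    # dp[t][i]: best accumulated score of a path ending at detection i, frame t
    dp = [[s for (_, s) in frames[0]]]
    back = []
    for t in range(1, len(frames)):
        cur, ptr = [], []
        for box, score in frames[t]:
            cands = [dp[t - 1][j] + lam * overlap(frames[t - 1][j][0], box)
                     for j in range(len(frames[t - 1]))]
            j_best = max(range(len(cands)), key=cands.__getitem__)
            cur.append(score + cands[j_best])
            ptr.append(j_best)
        dp.append(cur)
        back.append(ptr)
    # trace the best path backwards through the stored pointers
    i = max(range(len(dp[-1])), key=dp[-1].__getitem__)
    path = [i]
    for ptr in reversed(back):
        i = ptr[i]
        path.append(i)
    return list(reversed(path))
```

The second pass (temporal trimming by enforcing label consistency) can be solved by an analogous dynamic programme over the labelling of the frames along each path.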

Arxiv paper coming soon

July 1 2016:

The Artificial Intelligence and Vision research group, led by PhD student Gurkirt Singh, has won second place in the latest CVPR ActivityNet Large Scale Activity Detection Challenge. Leaderboard

The ActivityNet Large Scale Activity Recognition Challenge is a half-day workshop to be held on July 1 in conjunction with CVPR 2016, in Las Vegas, Nevada. In this workshop, we establish a new challenge to stimulate the computer vision community to develop new algorithms and techniques that improve the state-of-the-art in human activity understanding. The data of this challenge is based on the newly published ActivityNet benchmark.

The challenge focuses on recognizing high-level and goal-oriented activities from user-generated videos, similar to those found in internet portals. This challenge is tailored to 200 activity categories in two different tasks. (a) Untrimmed Classification Challenge: given a long video, predict the labels of the activities present in the video; (b) Detection Challenge: given a long video, predict the labels and temporal extents of the activities present in the video.

Report in PDF format

January 2016:

Fabio's tutorial "Belief functions for the working scientist" has been accepted for a half-day presentation at IJCAI 2016, the premier international conference on Artificial Intelligence, which will take place at the Hilton Midtown Hotel, New York City, on July 9-15 2016.

A dedicated web site can be found HERE.

The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference, and was later developed by Glenn Shafer as a general framework for modelling epistemic uncertainty. Belief theory and the closely related random set theory form a natural framework for modelling situations in which data are missing or scarce: think of extremely rare events such as volcanic eruptions or power plant meltdowns, problems subject to huge uncertainties due to the number and complexity of the factors involved (e.g. climate change), but also the all-important issue with generalisation from small training sets in machine learning.

This tutorial is designed to introduce the principles and rationale of random sets and belief function theory to the wider AI audience, survey the key elements of the methodology and the most recent developments, and make AI practitioners aware of the set of tools that have been developed for reasoning in the belief function framework on real-world problems. Attendees will acquire first-hand knowledge of how to apply these tools to significant problems in major application fields such as computer vision, climate change, and others. The performance of these approaches will be critically compared with that of more classical regression, classification or estimation methods, to highlight the advantage of modelling the lack of data explicitly.

February 2015:

Fabio was invited to the Oxford Martin School workshop on "Artificial Intelligence and Predictive Modelling" with Garry Kasparov

Fabio was also invited to a private dinner with Garry and other distinguished guests at Balliol College.

When Garry Kasparov visited the Oxford Martin School this week, he came with a strong message about innovation: society has become too risk averse and we are at risk of failing to innovate if investor mindsets don’t change soon. During two lively workshops, the former World Chess Champion debated the future of innovation with 20 researchers from the University of Oxford, Oxford Brookes and industry. He also delivered a lecture to an audience of 440 at the University of Oxford’s Examination Schools. Top of Kasparov’s agenda was the issue of risk aversion and its impact on societal progress. “A fear of uncertainty holds us back from doing things quickly and productively,” he argued in his second workshop. “Just look at the airline industry. Planes are getting better in terms of comfort and fuel efficiency but not going faster. Our preference is for comfort over speed. This mentality is reflected in many different areas; we have become a risk averse society.”

September 2014:

Fabio's monograph entitled "Visions of a Generalized Probability Theory" has been published by Lambert Academic Publishing

The theory of evidence (also known as ‘evidential reasoning’, ‘belief theory’ or ‘Dempster-Shafer theory’) is, perhaps, one of the most successful frameworks for uncertainty modelling, and arguably the most straightforward and intuitive approach to a generalized probability theory. Emerging in the late Sixties from a profound criticism of the more classical Bayesian theory of inference and modelling of uncertainty, evidential reasoning has stimulated in the last four decades an extensive discussion on the epistemic nature of both subjective ‘degrees of beliefs’ and frequentist ‘chances’.

Computer vision is a fast-growing discipline whose ambitious goal is to equip machines with the intelligent visual skills humans and animals are provided by Nature, allowing them to interact effortlessly with complex and inherently uncertain environments. This book shows how the fruitful interaction of computer vision and belief calculus is capable of stimulating significant advances in both fields. Novel results on the mathematics of belief functions are developed in response to the issues posed by fundamental vision problems, to which, in turn, novel evidential solutions are proposed.

September 2014:

Springer's Lecture Notes in Artificial Intelligence Volume 8764 entitled Belief Functions: Theory and Applications, edited by Fabio, is available online.

Belief Functions: Theory and Applications
Third International Conference, BELIEF 2014, Oxford, UK, September 26-28, 2014. Proceedings
Series: Lecture Notes in Computer Science, Vol. 8764
Subseries: Lecture Notes in Artificial Intelligence
Cuzzolin, Fabio (Ed.)
2014, XVIII, 444 p. 92 illus.

This book constitutes the thoroughly refereed proceedings of the Third International Conference on Belief Functions, BELIEF 2014, held in Oxford, UK, in September 2014. The 47 revised full papers presented in this book were carefully selected and reviewed from 56 submissions. The papers are organized in topical sections on belief combination; machine learning; applications; theory; networks; information fusion; data association; and geometry.

September 26-28 2014:

The Third Edition of the International Conference on Belief Functions was successfully held at St Hugh's College, Oxford.

BELIEF 2014, the third edition of the series of conferences on the theory and application of belief functions is already over, and it is time to sum up the outcomes of this exciting experience and draw some lessons for the future of the conference and the community at large.

November 2012:

Fabio's monograph on "The geometry of uncertainty" has been conditionally approved by Springer-Verlag's "Information Science and Statistics" series

The book is about the geometry of various mathematical descriptions of uncertainty, known as "imprecise probabilities", proposed in the last forty years as alternatives or competitors to classical probability theory. These objects can be seen as points living in a certain geometrical space, and can therefore be handled by geometric means. The book indeed provides a geometrical language for working with imprecise probabilities.

The reviewers commented that "there is no other book addressing the Dempster-Shafer theory of evidence in such exhaustive detail", "there has not been a detailed study of the geometry of belief functions and as such I believe this book would be a very welcome addition to the literature."

October 12 2012:

Fabio has been awarded one of the Next 10 Awards by the Faculty of Technology, Design and Environment (TDE).

The committee overseeing the 'Next 10 Programme' met recently and supported Fabio’s application. Activities should begin this academic year at a point to be agreed with the HoD. Rachel Harrison has been assigned as mentor for the programme and Fabio will also liaise closely with Nigel Crook.
A PhD student will be engaged as soon as possible in order to provide maximum strategic benefit to the development of the planned research and growth of the area. A key objective will be the future development of a successful and focused team. The student will be expected to contribute to such things as the development of major funding proposals in addition to carrying out a formal programme of related PhD study.

Next 10 is a research accelerator programme, designed to help the top emerging researchers in the Faculty to progress towards professorial status and a leadership position within their discipline. The programme involves a PhD studentship. Start date: October 2012.

September 2012:

Fabio has taken on the role of Head of the Artificial Intelligence (formerly Machine Learning) research group.

September 5 2012:

Fabio has been awarded the Outstanding Reviewer Award at the latest British Machine Vision Conference (BMVC2012) in Surrey.

July 2012:

Fabio's student Michael Sapienza has been awarded the Best Poster Prize at the latest 2012 INRIA Summer School on Machine Learning and Visual Recognition, for his poster "Learning discriminative space-time actions from weakly labelled videos".

Current state-of-the-art action classification methods derive action representations from the entire video clip in which the action unfolds, even though this representation may include parts of actions and scene context which are shared amongst multiple classes. For example, different actions involving the movement of the hands may be performed whilst walking, against a common background. In this work, we propose an action classification framework in which discriminative action subvolumes are learned in a weakly supervised setting, owing to the difficulty of manually labelling massive video datasets. The learned sub-action models are used to simultaneously classify video clips and to localise actions in space-time. Each subvolume is cast as a BoF instance in an MIL framework, which in turn is used to learn its class membership. We demonstrate quantitatively that the classification performance of our proposed algorithm is comparable and in some cases superior to the current state-of-the-art on the most challenging video datasets, whilst additionally estimating space-time localisation information.

July 19 2011:

Fabio has been promoted to Reader, effective September 1st 2011.

July 25 2011:

Fabio has been awarded a best poster award for his poster entitled "Geometric conditional belief functions in the belief space" at the latest ISIPTA'11 Symposium on Imprecise Probabilities.

In this poster we explore geometric conditioning in the belief space B, in which belief functions are represented by the vectors of their belief values b(A). We once again adopt distance measures d of the classical Lp family, as a further step towards a complete analysis of the geometric approach to conditioning. We show that geometric conditional b.f.s in B are more complex than in the mass space: less naive objects, whose interpretation in terms of degrees of belief is, however, less natural.
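Schematically, geometric conditioning amounts to projecting the original belief function onto the set of belief functions consistent with the conditioning event. The rendering below is a simplified paraphrase of the idea, not the poster's exact notation:

```latex
% Geometric conditioning of a belief function b by an event B: select the
% closest (w.r.t. an L_p distance d_p) belief function among those whose
% mass is concentrated on subsets of B:
b(\cdot \mid B) \;=\; \arg\min_{b' \in \mathcal{B}_B} d_p(b, b'),
\qquad \mathcal{B}_B = \{\, b' : m'(A) = 0 \ \text{for all} \ A \not\subseteq B \,\}.
```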

July 19 2011:

Fabio has received tenure and is now a Senior Lecturer with the Department of Computing and Communication Technologies, Oxford Brookes University.

February 23 2011:

Fabio has been awarded support for his EPSRC First Grant! This is a two-year, £122K grant, which will involve hiring a postdoctoral researcher in year 2.

November 12 2010:

Fabio has been nominated Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part C!

June 15 2010:

Following the latest Workshop on the Theory of Belief Functions, Fabio has been elected to the Board of Directors of the Belief Functions and Applications Society with 27 votes.

Fabio Cuzzolin received the best paper award for the outstanding technical contribution assigned to the paper:

Alternative formulations of the theory of evidence based on basic plausibility and commonality assignments

at the Tenth Pacific Rim International Conference on Artificial Intelligence (PRICAI-08), Hanoi, Vietnam, 15-19 December 2008.

The Pacific Rim International Conference on Artificial Intelligence (PRICAI) is a biennial international event which concentrates on AI theories, technologies and their applications in areas of social and economic importance for countries of the Pacific Rim. In the past, conferences have been held in Nagoya (1990), Seoul (1992), Beijing (1994), Cairns (1996), Singapore (1998), Melbourne (2000), Tokyo (2002), Auckland (2004) and Guilin (2006).

The paper introduces two novel alternative mathematical formulations of the theory of belief functions or "theory of evidence". We prove that the equivalent representations of evidence given by plausibility and commonality functions have the combinatorial structure of sum functions, just like belief functions do, and we compute their Moebius inverses.