Latest News!



December 4 2019

Gurkirt successfully defends his PhD!

Dr Gurkirt Singh has successfully defended his PhD thesis at his viva, held on December 4 2019. The internal examiner was Dr Fridolin Wild, Senior Research Fellow at Oxford Brookes University, while the external examiner was Prof Andrea Vedaldi, Associate Professor at the Department of Engineering Science of the University of Oxford.

The title of Gurkirt's work is Online Spatiotemporal Action Detection and Prediction via Causal Representations.

Congratulations to Gurkirt for his excellent PhD work!

November 2019

Area Chair for ECCV 2020

Fabio has been invited to act as Area Chair for the upcoming European Conference on Computer Vision (ECCV 2020), which will be held in Glasgow, August 23-28 2020.

The European Conference on Computer Vision is the top European conference in the image analysis area, and one of the top three computer vision venues, together with ICCV and CVPR. ECCV 2020 will be a four-day, single-track conference, with additional activities: over 1000 posters, workshops, tutorials, and an industrial exhibition. The conference will present high-quality, previously unpublished research on many aspects of computer vision.



Important dates:
  • Paper submission deadline: 5 March 2020;
  • Rebuttal Period: 21 - 27 May 2020;
  • Decisions to Authors: 3 July 2020;
  • Final Version Deadline: 17 July 2020;
  • Conference Dates: 23-28 August 2020.
November 13 2019

India government minister praises Dinesh's doctoral work

Our SARAS research fellow Dinesh Jackson's doctoral thesis on "Tuberculosis Recognition System using Deep Learning Techniques" was publicly praised by Smriti Irani, India's Union Minister for Women & Child Development and Textiles, speaking at the annual convocation of VIT (Vellore Institute of Technology), India.



Speaking at the annual convocation of VIT (Vellore Institute of Technology), Chennai, Smriti Irani noted that VIT students already appeared to account for a sizeable share of work on AI in healthcare systems. Specifically mentioning the theses of two students, Jeevakala (computer-aided diagnosis system) and Jackson Samuel (TB recognition system), the Minister said she was keen to read these works.

A link to a news article can be found here: https://www.deccanchronicle.com/nation/current-affairs/131119/smriti-irani-wants-more-research-for-inclusive-india.html
December 3 2019

Dinesh and Mohamed have joined the Lab

Two new members of staff, Dr Dinesh Jackson Samuel Ravindran Charles and Dr Mohamed Ibrahim Mohamed have joined the Laboratory, as part of the SARAS Horizon 2020 project.

Dinesh completed his PhD studies at Vellore Institute of Technology, Chennai, India. His doctoral dissertation concerned the development of a “Cybernetic Tuberculosis (TB) Detection System using Deep Learning Techniques” to assist technicians in areas with high disease prevalence. As part of this research, Dinesh designed and developed a programmable microscopic stage to automate microscopic examination, which mitigates the reliance on skilled technicians. He worked as a Teaching cum Research Assistant at Vellore Institute of Technology from 2014.

Before joining the Lab, Mohamed was working as a Senior Computer Vision Engineer at Huawei Technologies, where he was responsible for designing and architecting computer vision solutions for Android phones using Python, TensorFlow and Keras in a Linux environment.
Mohamed obtained his PhD in Electrical Engineering in June 2016 from Staffordshire University. His dissertation focused on using machine learning techniques to design real-time event detection algorithms that work robustly in sensor nodes, and on the development of an intelligent adaptive data reduction algorithm based on Markov Decision Processes (MDPs).





September 17 2019

SARAS's mid-term review was a success!



Check out the SARAS web site here for more up-to-date news.
August 17 2019:

The papers

S. Olivastri, G. Singh and F. Cuzzolin, End-to-End Video Captioning

Link to preprint

G. Singh and F. Cuzzolin, Recurrent Convolutions for Causal 3D CNNs

Link to preprint

were accepted for publication at the First International Workshop on Large Scale Holistic Video Understanding at ICCV 2019, Seoul, South Korea, October 2019.

The International Conference on Computer Vision (ICCV) is the premier international venue in the field of computer vision. The main objective of the workshop is to establish a video benchmark integrating joint recognition of all the semantic concepts, as a single class label per task is often not sufficient to describe the holistic content of a video. The planned panel discussion with the world’s leading experts on this problem will be a fruitful input and source of ideas for all participants.


August 13 2019

Venus has arrived!

Venus, our new SCAN-built 8-GPU workstation, equipped with eight RTX cards totalling 192GB of GPU memory, has arrived.

Venus will join our existing machines, Mercury, Mars, Sun and Jupiter, and will constitute the backbone of the computing resources of the Laboratory, practically doubling our processing power and allowing the processing of video clips containing 64 video frames. This will be crucial to allow us not just to match but to outperform the existing state of the art in action classification and detection.
August 2019

New SARAS postdocs have joined the Lab

Two new members of staff, Prof Inna Skarha-Bandurova and Dr Vivek Singh have just joined the Laboratory, as part of the SARAS Horizon 2020 project.

Inna was previously the Head of the Computer Science and Engineering (CSE) Department, V. Dahl East Ukrainian National University (EUNU), Severodonetsk, Ukraine. She is the author of more than 150 scientific publications, 3 books, 10 academic courses, and 44 teaching and learning materials. She has worked on local and international research projects since 2002, and has extensive knowledge and practical experience in different areas of AI, including expert systems, decision support techniques and machine learning.

Vivek was previously a research consultant with Softonics IT Services, NOIDA, and a teaching associate at Thapar Institute of Engineering and Technology, Patiala. His research mainly focuses on the structural components of deep learning algorithms, with the aim of enhancing their modeling capacity and overcoming their inherent limitations. He has also worked on different applications of computer vision, deep learning and machine learning algorithms in vision- and natural language-based systems.


August 2019

Visit of Professor Ahmad Osman

Professor Ahmad Osman from the Fraunhofer Institute and the Saarland University of Applied Sciences, Germany, is visiting the Visual Artificial Intelligence Laboratory and the School of Engineering, Computing and Mathematics.
Ahmad will stay with us for the month of August.

Ahmad Osman is a Professor for Inspection Technologies and Signal and Image Processing at the Fraunhofer Institute for Nondestructive Testing (IZFP), and the Leader of the AutomaTiQ research group. His research interests span industrial applications of machine learning, sensor fusion in the framework of evidence theory, signal and image processing with a focus on object and defect detection, nondestructive testing methods, quality control and driver assistance systems.

The visit is key to pave the way to a wider collaboration between Oxford Brookes University, the Saarland University of Applied Sciences and the Fraunhofer Institute, covering a number of aspects:
  • A joint Horizon 2020 application to the upcoming i4MS call, deadline November 18 2019, led by KU Leuven;
  • The possibility of establishing a permanent exchange of MSc students in the framework of the Erasmus+ scheme;
  • Joint research in the field of decision making under uncertainty, but also autonomous driving and visual inspection;
  • Finally, the possible joint application to the Marie Curie programme, in collaboration with INSA-Lyon and other partners.


August 5 2019

ICCV 2019 Best Reviewer Award!

PhD student Gurkirt Singh has received a Best Reviewer award from ICCV 2019, the International Conference on Computer Vision, a top venue in the field of computer vision. The award recognises Gurkirt's outstanding work in assessing other scientists' work in a fair and accurate manner.

Congratulations to Guru!


July 17 2019

Third place in the 2019 Formula Student - AI competition

We are absolutely delighted to announce that our brand new Autonomous Formula Student team successfully completed 10 laps of the circuit with a fully self-driving car in the 'Track Drive' dynamic event, finishing in 3rd place overall. This represents an enormous success in pulling together computer vision, localisation, path planning, control strategies and the overall integration of the system into the vehicle.

Of particular note are the following highlights:

  • 1st place in 'Real World Autonomous Driving' presentation
    In a collaboration between the Autonomous Driving research group and the Visual Artificial Intelligence Laboratory, the team delivered an incredibly impressive presentation of real autonomous driving challenges which are the subject of current research at OBU - leading them to score full marks in this element of the competition.
    Thanks to Fabio Cuzzolin, Reza Javanmard, Gurkirt Singh, Peter Ball, Matthias Rolf, Muhammad Hilmi Kamarudin and Gokhan Budan for all your help and support - the trophy belongs to you too!

  • 2nd place in 'Business Plan' event
    Working closely with Oxfordshire County Council and the MAAS:CAV consortium to develop a real business proposal (which we plan to use as a feasibility study in an upcoming research grant application), the team achieved an impressive 2nd place in the Business Plan competition - narrowly missing out on first place by less than 1 point!

  • 2nd place in 'Design' competition
    Presenting their autonomous driving software and designs to a panel of industry experts, the team were praised for their innovative ideas, detailed explanations and impressive presentation. This resulted in them being invited by one of the judges to present their work to the staff at RoboRace - the 'Autonomous Formula 1'.
Huge congratulations go to Petar Georgiev and the team of students who made all this happen - we could not be more proud of you!


Formula Student - Artificial Intelligence

Prospectus


July 2 2019:

Fabio was awarded a Leverhulme Trust Research Project Grant, for a project entitled Theory of mind at the interface of neuroscience and AI, in partnership with Professor Barbara Sahakian, Department of Psychiatry, University of Cambridge.

The project will last 30 months, with a total budget of £273,000, which will be used to hire two postdoctoral research assistants at Oxford Brookes and Cambridge University.

Emerging applications of artificial intelligence are highlighting the limitations of established approaches in situations involving humans. The integration of neuroscience and machine learning has the potential to enable significant advances in both fields. Theory of Mind capabilities, i.e., the ability to 'read' other sentient beings' mental states, are crucial for the development of a next generation, "human-centric" artificial intelligence aimed to understand the behaviour of complex agents. In a mutually beneficial process, computational models developed within artificial intelligence could provide new insights about how these mechanisms work in the human brain.


June 2019

Brookes climbs 8 spots to #33 in the Guardian university guide 2020



May 2019

A new KTP Associate

Dr Neha Bhargava has joined the Lab as the next Associate funded by the Knowledge Transfer Partnership with Createc Technologies and Sportlight.ai. Neha will stay with us for at least two years, until April 2021.

Before joining the Lab, Neha completed her PhD at the Vision and Image Processing Lab of the Indian Institute of Technology (IIT) Bombay, under the supervision of Professor Subhasis Chaudhuri.
Neha conducted her PhD on the topic of understanding crowd behaviour. The purpose of her thesis was to analyse crowd motion at various levels of granularity: individual, group and collective. To tackle this problem, she proposed a unified framework for identifying groups and the activities performed at each level.
As part of this project, Neha will work towards revolutionising sports analytics, and further progress on her previous work on crowd behaviour understanding using (multimodal) deep learning.


April 12 2019:

The paper Evidence Combination Based on Credal Belief Redistribution for Pattern Classification, co-authored by Prof Fabio Cuzzolin, has been accepted for publication by the IEEE Transactions on Fuzzy Systems, one of the top CS journals by impact factor (currently 8.415).

Evidence theory, also called belief functions theory, provides an efficient tool to represent and combine uncertain information for pattern classification. Evidence combination can be interpreted, in some applications, as classifier fusion. The sources of evidence corresponding to multiple classifiers usually exhibit different classification qualities, and they are often discounted using different weights before combination. In order to achieve the best possible fusion performance, a new Credal Belief Redistribution (CBR) method is proposed to revise such evidence. The rationale of CBR consists in transferring belief from one class not just to other classes but also to the associated disjunctions of classes (i.e., meta-classes). As classification accuracy for different objects in a given classifier can also vary, the evidence is revised according to prior knowledge mined from its training neighbors. If the selected neighbors are relatively close to the evidence, a large amount of belief will be discounted for redistribution. Otherwise, only a small fraction of belief will enter the redistribution procedure. An imprecision matrix estimated based on these neighbors is employed to specifically redistribute the discounted beliefs. This matrix expresses the likelihood of misclassification (i.e., the probability of a test pattern belonging to a class different from the one assigned to it by the classifier). In CBR, the discounted beliefs are divided into two parts. One part is transferred between singleton classes, whereas the other is cautiously committed to the associated meta-classes. By doing this, one can efficiently reduce the chance of misclassification by modeling partial imprecision. The multiple revised pieces of evidence are finally combined by Dempster-Shafer rule to reduce uncertainty and further improve classification accuracy. The effectiveness of CBR is extensively validated on several real datasets from the UCI repository, and critically compared with that of other related fusion methods.
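
For readers unfamiliar with the final combination step mentioned above, Dempster's rule merges two mass functions m_1 and m_2 defined over the same frame of discernment as follows (standard Dempster-Shafer combination, recalled here as background, not specific to CBR):

    (m_1 \oplus m_2)(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C) \quad \text{for } A \neq \emptyset, \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),

where K measures the conflict between the two pieces of evidence and (m_1 \oplus m_2)(\emptyset) = 0.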

Paper preprint PDF




March 13, 2019

UKIERI project funded

The Visual AI Laboratory has secured, in partnership with the Indian Institute of Technology (IIT) Bombay, funding from UKIERI (the UK-India Education and Research Initiative) for a project on "Analysis of Human Action in Unconstrained Videos". IIT Bombay Director Subhasis Chaudhuri will lead the Indian side of the effort.

Human action detection and recognition from videos are two of the most challenging tasks in computer vision. These problems become even more severe when dealing with fine-grained action categories. An exploration of the evolution of salient body parts (local motion) is needed in this respect to better discriminate such similar-looking human activities. Dominant action detection paradigms work by locating actions of interest on a frame-by-frame basis, and linking them up in time to form ‘action tubes’. Moreover, given the vast category of possible actions, it is very hard to annotate labelled training videos in a cost-effective manner. The notion of ‘zero-shot’ classification can be adopted in such situations for the categorization of previously unexplored human activities. In this perspective, the project will explore the notion of mid-level feature mining from video data.



March 2019

Two new members of staff

Two new members of staff have joined the Laboratory.

Dr Reza Javanmard Alitappeh is the new Fellow in AI for Autonomous Driving funded by the School of Engineering, Computing and Mathematics.
Before joining the Lab, Reza was Assistant Professor at the University of Science and Technology of Mazandaran, Iran.
He has been appointed for two years to work on our proposal for decision making in autonomous driving based on endowing machines with theory of mind capabilities, and the validation of these notions in a simulated environment, in collaboration with Andrew Bradley's Autonomous Driving research group.
He will also take charge of the general effort in the area of autonomous driving in the School, and advise the work of the newly created Autonomous Driving Student Society.

Wojtek Buczynski is a PhD student based at Cambridge University, under the supervision of Professor Barbara Sahakian. Professor Cuzzolin has been invited to act as second supervisor on AI aspects. The topic of Wojtek's PhD will centre around the applicability of AI to portfolio allocation in the financial industry.
Wojtek is currently a Senior Manager at Fidelity International. He completed his Master’s in Finance at the London Business School in 2011. He obtained his FRM designation in 2014 and a CFA designation in 2015. He is interested in artificial intelligence (AI), cutting-edge technology, FinTech and financial innovation, and behavioural finance.


February 15, 2019

Promotion to Professor level 2

On February 15, 2019 the university’s Senior Academic Promotions Committee considered and approved Prof Cuzzolin's application for promotion to Professor Level 2. The contract amendment was backdated to 1 September 2018.

Fabio would like to thank all external referees who were so kind as to support his application!


January 2019

Internship at Borealis AI, Vancouver

PhD student Gurkirt Singh has started a three-month internship in Vancouver at Borealis AI, a startup funded by Royal Bank of Canada, under the supervision of Professor Greg Mori. He will be working on graph neural networks for human-object interaction.

Borealis AI supports RBC’s innovation strategy through fundamental scientific study and exploration in machine learning theory and applications. The team aims to advance the state of the art and supports academic collaborations with world-class research centres in artificial intelligence.


January 2019

Invited talks at ICRA 2019 and the Hamlyn Symposium

Professor Cuzzolin has been invited to speak at the upcoming ICRA 2019 (the International Conference on Robotics and Automation) Workshop: Next Generation Surgery: Seamless integration of Robotics, Machine Learning and Knowledge Representation within the operating rooms.

The use of surgical robots has - beyond doubt - led to advances and improvements in surgery. The next significant forward leap is expected with the introduction of intelligent systems that can operate autonomously, or semi-autonomously in cooperation with the surgeons. In this quest for intelligence, growing synergies from diverse scientific branches have emerged. These include the areas of machine learning, knowledge representation, perceptual interfaces, as well as new robotic concepts and methodologies able to accommodate this ever-increasing body of scientific research. Outside academic research settings, evidence of this exponential growth can also be witnessed in the significant investment committed by commercial surgical robot developers and manufacturers. New high-tech companies and start-ups are also emerging at an increasing rate. The aim of this workshop is to explore the next generation of robotic surgery from different and diverse angles. One aspect concentrates on the most innovative technologies and advances in the fields of robotics, machine learning, artificial intelligence and knowledge representation. A second aspect focuses on international scientific projects presented as motivating case studies. Importantly, the industrial point of view is accommodated in a “reality testing” role, regarding the current level of adoption of scientific research in the field and future potential.

Professor Cuzzolin has also been invited to speak at the Hamlyn Symposium Workshop: “Towards robotic autonomy in surgery”, London, June 23 2019.

Dexterity and perception capabilities of surgical robots may soon be enhanced by cognitive functions that can support surgeons in decision making and performance monitoring, and enhance surgical quality.
However, the basic elements of autonomy are not well understood and their mutual interaction is unexplored. The current classification of autonomy encompasses six basic levels: Level 0: no autonomy; Level 1: robot assistance; Level 2: task autonomy; Level 3: conditional autonomy; Level 4: high autonomy; Level 5: full autonomy.


October 2018

Paper at ACCV 2018

A paper was accepted for publication at ACCV 2018 (the Asian Conference on Computer Vision), entitled "TraMNet - Transition Matrix Network for Efficient Action Tube Proposals" by Gurkirt Singh, Suman Saha, and Fabio Cuzzolin.

Current state-of-the-art methods solve spatio-temporal action localisation by extending 2D anchors to 3D-cuboid proposals on stacks of frames, to generate sets of temporally connected bounding boxes called action micro-tubes. However, they fail to consider that the underlying anchor proposal hypotheses should also move (transition) from frame to frame, as the actor or the camera do. Assuming we evaluate n 2D anchors in each frame, then the number of possible transitions from each 2D anchor to the next, for a sequence of f consecutive frames, is in the order of O(n^f), expensive even for small values of f. To avoid this problem we introduce a Transition-Matrix-based Network (TraMNet) which relies on computing transition probabilities between anchor proposals while maximising their overlap with ground truth bounding boxes across frames, and enforcing sparsity via a transition threshold. As the resulting transition matrix is sparse and stochastic, this reduces the proposal hypothesis search space from O(n^f) to the cardinality of the thresholded matrix. At training time, transitions are specific to cell locations of the feature maps, so that a sparse (efficient) transition matrix is used to train the network. At test time, a denser transition matrix can be obtained either by decreasing the threshold or by adding to it all the relative transitions originating from any cell location, allowing the network to handle transitions in the test data that might not have been present in the training data, and making detection translation-invariant. Finally, we show that our network is able to handle sparse annotations such as those available in the DALY dataset, while allowing for both dense (accurate) or sparse (efficient) evaluation within a single model. We report extensive experiments on the DALY, UCF101-24 and Transformed-UCF101-24 datasets to support our claims.
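
As a rough illustration of the transition-matrix idea (a minimal sketch, not the authors' implementation: the function and variable names are ours, and the per-frame anchor indices matched to each ground-truth tube are assumed to come from a separate anchor-matching step), transitions between consecutive frames can be counted, normalised into probabilities and sparsified with a threshold:

    import numpy as np

    def estimate_transition_matrix(matched_anchor_sequences, num_anchors, threshold=0.01):
        # Count how often ground-truth tubes move from anchor i at frame t
        # to anchor j at frame t+1.
        counts = np.zeros((num_anchors, num_anchors))
        for seq in matched_anchor_sequences:      # one sequence of anchor ids per training tube
            for i, j in zip(seq[:-1], seq[1:]):
                counts[i, j] += 1.0
        # Normalise each row into transition probabilities (rows with no data stay zero).
        row_sums = counts.sum(axis=1, keepdims=True)
        probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
        # Enforce sparsity: drop transitions below the threshold.
        probs[probs < threshold] = 0.0
        return probs

    # Toy usage: three training tubes over a grid of 5 anchor locations.
    tubes = [[0, 0, 1, 1], [0, 1, 2, 2], [3, 3, 3, 4]]
    print(estimate_transition_matrix(tubes, num_anchors=5))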

The PDF version of the paper can be found here.


September 2018

New edited volume with Springer

This book constitutes the refereed proceedings of the 5th International Conference on Belief Functions, BELIEF 2018, held in Compiègne, France, in September 2018. The 33 revised regular papers presented in this book were carefully selected and reviewed from 73 submissions. Papers were solicited on theoretical aspects (including for example statistical inference, mathematical foundations, continuous belief functions) as well as on applications in various areas including classification, statistics, data fusion, network analysis and intelligent vehicles.


September 2018

Associate Editorship of the International Journal of Approximate Reasoning

Professor Cuzzolin has accepted an Associate Editor position with the International Journal of Approximate Reasoning.

The International Journal of Approximate Reasoning is intended to serve as a forum for the treatment of imprecision and uncertainty in Artificial and Computational Intelligence, covering both the foundations of uncertainty theories, and the design of intelligent systems for scientific and engineering applications. It publishes high-quality research papers describing theoretical developments or innovative applications, as well as review articles on topics of general interest.
Relevant topics include, but are not limited to, probabilistic reasoning and Bayesian networks, imprecise probabilities, random sets, belief functions (Dempster-Shafer theory), possibility theory, fuzzy sets, rough sets, decision theory, non-additive measures and integrals, qualitative reasoning about uncertainty, comparative probability orderings, game-theoretic probability, default reasoning, nonstandard logics, argumentation systems, inconsistency tolerant reasoning, elicitation techniques, philosophical foundations and psychological models of uncertain reasoning.

The journal is affiliated with the Society for Imprecise Probability: Theories and Applications (SIPTA), and the Belief Functions and Applications Society (BFAS).
The Editor-in-Chief is Professor Thierry Denoeux. The 2017 impact factor of IJAR is 1.766.


September 2018

Board position for a new Huawei-SFU research centre

Professor Cuzzolin has started a new position as Executive Committee member for the new Huawei - Simon Fraser University research centre in Vancouver, Canada.


August 2018

New Research Fellow in Artificial Intelligence for Autonomous Driving

The Visual AI Laboratory, in partnership with Dr Matthias Rolf of the Cognitive Robotics group and the Autonomous Driving group led by Dr Andrew Bradley, has secured funding of £100,000 from the School of Engineering, Computing and Mathematics to support a Research Fellow in Artificial Intelligence for Autonomous Driving, for a period of two years.

The project concerns the design and development of novel ways for robots and autonomous machines to interact with humans in a variety of emerging scenarios, including: human-robot interaction, autonomous driving, personal (virtual or robotic) assistants. In particular, we believe novel, disruptive applications of AI require much more sophisticated forms of communication between humans and machines, something that goes far beyond conventional explicit and linguistic exchange of information towards implicit non-verbal communication and understanding of each other's behaviour.
For example, smart cars need to understand that children and construction workers have different reasoning processes that lead to very different observable behaviour, in order to blend in with the road as a human-centered environment. Empathic machines have the potential to revolutionise healthcare, by providing better care catering for the psychological needs of patients. Morally and socially appropriate behaviour is key in all such scenarios, to build trust and lead to acceptance from the public.
Exciting research is currently going on in moral robotics and AI, including moral development (how a robot can learn moral principles), fairness and bias in, for instance, AI-assisted recruitment. As smart cars head towards real world deployment, the field is shifting from mere perception (e.g. SLAM) to higher-level cognition tasks, starting from the automated detection of road events. Holographic AI is going to revolutionise the field of personal assistants, but needs effective communication interfaces.

Cuzzolin is exploring the design and implementation of a machine theory of mind model based on a simulation approach, in which input stimuli drive an agent-specific simulation of their mental states. Simulations are implemented as reconfigurable deep neural networks, learned by reinforcement learning. Closely related to this, Rolf is investigating socially-originated rewards for reinforcement learning, including pre-linguistic cues such as face detection, synchrony and contingency, as well as investigating robotic moral issues. Both research directions are directly applicable to autonomous driving – the Visual AI Lab is currently providing road event and agent activity annotation for the Oxford RobotCar dataset, which is bound to have a significant impact on the field as the first such benchmark. The benchmark will be released in October 2018. In the first year of the project, the Fellow would implement reinforcement learning based machine theory of mind models and test them on the new data to provide a proof of concept. Bradley has been working in the area of vehicle simulation for many years, and also on driver behaviour analysis using a driving simulator (with Prof Helen Dawes). Bradley is currently working with Dr Peter Ball on areas of modelling autonomous vehicle behaviour, resulting in a recent Innovate UK application for Connected and Autonomous Vehicle (CAV) simulation.

A Research Fellow Grade 8 position (starting salary: £30,688) will be advertised as soon as September 2018.


August 2018

New Knowledge Transfer Partnership with Createc and Sportslate

A Knowledge Transfer Partnership (KTP) with Createc and Sportslate, two successful spinoffs of Oxford University, was funded in the latest round by Innovate UK.

The project is split into two key phases each taking approximately 12 months, aiming to demonstrate a simple proof of concept at the mid-point with the second year focused on maturation, refinement and steps to commercialisation. The first phase will consist of the Associate reviewing the state of the art and conducting a literature review, understanding the hardware and system architecture and capturing further datasets for algorithmic training, in addition to the following technical work packages:
  1. Sensor fusion: The company's system provides not only video imagery from multiple viewpoints but also data providing depth, dynamic data and point cloud overlays over the imagery. This enables a novel approach to action identification where this extra information can be integrated with the video to enhance performance
  2. Person segmentation: The first task ahead of person or action identification is to segment the person from the background which due to the tracking system is highly dynamic. This is a key enabling task but there are multiple existing techniques for performing this task
  3. Person identification: It is important for all applications to associate an action with an individual. In the crowd monitoring case, single actions may be inconsequential but an individual carrying out multiple actions may be of more interest
  4. Single person action identification: This task will develop algorithms for identifying single person actions from the video data
These will be integrated for a proof of concept demonstration in month 13. The second phase of the work will integrate the algorithms with real customer datasets and other datasets held by Createc, enabling testing of the algorithms under a wide range of conditions. Inevitably this will lead to algorithm refinement. This work is important to demonstrate that the approaches can be used commercially with real data, therefore de-risking commercial exploitation beyond this project. Technically this phase will also include extension of the single person action identification to multi-people events, and for the system to understand these links.
Towards the end of the project, the algorithms and capabilities will be marketed to prospective customers, and the Associate will work on development of marketing material, videos and academic papers/presentations to raise the profile of the work.

A KTP Associate position will be advertised as soon as September 2018. Salary will be in the range £30,000 - £35,000 per annum.


July 30 2018

Paper at ECCV 2018 Workshop on Anticipating Human Behaviour

A paper was accepted for publication at the ECCV 2018 (the European Computer Vision Conference) AHB Workshop, entitled "Predicting Action Tubes" by Gurkirt Singh, Suman Saha, and Fabio Cuzzolin.

The purpose of this workshop is to discuss recent approaches that anticipate human behavior from video or other sensor data, to bring together researchers from multiple fields and perspectives, and to discuss major research problems and opportunities and how we should coordinate efforts to advance the field.

In this work, we present a method to predict an entire ‘action tube’ (a set of temporally linked bounding boxes) in a trimmed video just by observing a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision based applications such as autonomous driving or surgical robotics. Importantly, it has to be done in real time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time TPnet is used in a (temporal) sliding window setting, and its predictions are put into a tube estimation framework to construct/predict the video-long action tubes not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks - the J-HMDB-21 dataset.

The PDF version of the paper can be found here.


Summer 2018

The 3,600 mile experiment: Parkinson's disease on the ocean

The Visual AI Lab is a partner in the ongoing Parkinson's row in the Indian Ocean.

A crew are rowing across the Indian Ocean to shake up our understanding of Parkinson's disease—and break a world record while they're at it. For people with Parkinson's disease, exercise is prescribed to treat the symptoms most commonly associated with the condition. The muscle tremors, cramps and gait issues that characterise the disease appear to be mitigated with physical activity. Anecdotally, we know that endurance activities appear to be more beneficial for these physical symptoms, lessening the need for medication. But that's about as far as our understanding goes of the relationship between physical activity and Parkinson's disease. For instance, exercise doesn't seem to ward off the other, less visible symptoms of the disease in the same way and we don't know why. Fatigue, one of Parkinson's most disabling symptoms, appears to persist with sufferers even if they exercise. Why does one set of symptoms improve but not the other? Is endurance exercise key in that more is always better? Does endurance exercise affect Parkinson's sufferers differently to healthy people? What better way to answer these questions than to row a boat for 65 days straight, all the way from West Australia to Mauritius?

Robin Buttery, Barry Hayes, James Plumley and skipper Billy Taylor are planning on rowing across the Indian Ocean. Robin was diagnosed with young onset Parkinson's disease 2 years ago, just before his 44th birthday. Determined to show that life doesn't stop with his diagnosis, he's taken on the formidable challenge of rowing 2 hours on, 2 hours off for 12 weeks straight. Whilst it's marketed as an attempt to beat the world record, the row will hopefully serve another purpose. Behind the scenes of this international expedition are Professor Helen Dawes, Professor Fabio Cuzzolin and Dr. Johnny Collett of Oxford Brookes University in the UK. For them, the row is a scientific experiment, and the crew are their lab rats.

The event was recently covered in the following media pieces: “The 3,600 mile experiment: Parkinson's disease on the ocean” – MedicalXpress, June 25 2018; “Row for Parkinson’s” – The West Australian, 7 July 2018; “British Crew Rowing the Distance to Improve Understanding of Parkinson’s Disease”, June 27 2018.


May 2018

New paper at BMVC 2018

A paper was accepted for publication at BMVC 2018, the British Machine Vision Conference, entitled "Incremental Tube Construction for Human Action Detection" by Harkirat Behl, Michael Sapienza, Gurkirt Singh, Suman Saha, Fabio Cuzzolin and Philip H. S. Torr, a joint work with Oxford University's Torr Vision Group.

The British Machine Vision Conference (BMVC) is the British Machine Vision Association (BMVA) annual conference on machine vision, image processing, and pattern recognition. It is one of the major international conferences on computer vision and related areas held in the UK. Owing to its increasing popularity and quality, it has established itself as a prestigious event on the vision calendar.

Current state-of-the-art action detection systems are tailored for offline batch-processing applications. However, for online applications like human-robot interaction, current systems fall short. In this work, we introduce a real-time and online joint-labelling and association algorithm for action detection that can incrementally construct space-time action tubes on the most challenging untrimmed action videos in which different action categories occur concurrently. In contrast to previous methods, we solve the linking, action labelling and temporal localization problems jointly in a single pass. Our online algorithm outperforms the current state-of-the-art offline and online systems in terms of accuracy with a margin of 16% in mAP, and in terms of speed (1.8ms per frame). We further demonstrate that the entire action detection pipeline can easily be made to work effectively in real-time using our action tube construction algorithm.

The PDF version of the paper can be found here.


July 10 2018

Invited talk at COSUR 2018

Fabio was invited to speak at the upcoming COSUR 2018 Summer School on Surgical Robotics.

The main objective of COSUR 2018 is to introduce PhD students and Post-Doctoral fellows to the multidisciplinary research field of surgical robotics, with particular focus on the control algorithms used in robotic surgery and the impact of cognition in directing the control. We will offer lectures, hands-on laboratory experience, and opportunity for informal interaction with clinicians and leading experts from academia and industry. The school will go beyond the current approach of doctoral schools and will give trainees an in depth understanding of cognition and control in robotic surgery.


June 2018

Brookes rises by 9 places in the Guardian university guide 2019



May 2018

Two papers accepted at BELIEF 2018

Two papers were accepted for publication at the joint SMPS-BELIEF 2018 International Conference, entitled "General geometry of belief function combination" and "Generalised max entropy classifiers".

The BELIEF and SMPS conferences are biennial events concerning the modeling of uncertainty. The BELIEF conferences are sponsored by the Belief Functions and Applications Society (BFAS) and are focused on the theory of belief functions, while the scope of SMPS covers the application of all approaches to uncertainty (including fuzzy and rough sets, imprecise probabilities, etc.) to statistics and data analysis. The co-location of the two events is intended to favor cross-fertilization among researchers active in both communities.

General geometry of belief function combination: PDF version.
Generalised max entropy classifiers: PDF version.


June 1 2018

Invited tutorial at Seoul National University

Prof Cuzzolin was invited to give a tutorial on "Belief functions: A gentle introduction" at the Department of Statistics of Seoul National University, the top Korean university.

The event was organised by Associate Teaching Professor Hyeyoung Jung. Tutorial slides are available here.


May 2018

Three new visitors joining the Lab!

Three new visitors have joined the Laboratory in May.
Valentina Fontana is an MSc student from University Federico II in Naples, visiting as part of an Erasmus+ exchange programme with the local IDEAinVR lab led by Prof Giuseppe Di Gironimo. Valentina will stay with us until September, and work on a dissertation on recognising complex road events for autonomous driving.

Silvio Olivastri is a Visiting Researcher from AI Labs, Bologna, Italy. AI Labs is seeking a longer term partnership with the Visual AI Lab. Silvio will work on the deep video captioning project started by former postdoc Ruomei Yan.

Santanu Rathod is a second-year student from IIT Bombay, part of an exchange programme between Brookes' ECM School and IIT. Santanu will work on predicting future actions using deep learning, over a period of three months.


May 5-9, 2018

The Fifth Bayesian, Fiducial and Frequentist conference (BFF 5)

Fabio has been invited to speak at the latest edition of the Bayesian, fiducial and frequentist (BFF) series of statistical conferences.

The BFF series began in 2014 with the goals of facilitating the exchange of research developments in Bayesian, fiducial and frequentist (BFF) methodology, bridging gaps among the different statistical paradigms, stimulating collaborations, and fostering opportunities for the involvement of new researchers. Over the last four years, these meetings have served as a platform for comparing and connecting methods and theory from the differing, yet related, BFF perspectives.

The 2018 BFF5 will focus on the theme of “Foundations of Data Science”. Invited talks (30 minutes in length) are encouraged to align with re-examining the role of, and reporting new advances on, the foundations of statistical inference in this new era of data science. This year, we are also offering short courses on fiducial statistics and confidence distributions on Sunday, May 6, followed by the main conference on May 7-9. The short courses will prepare conference attendees to better participate in the scientific programs of the main conference. A conference banquet is planned for the evening of Monday, May 7. Dr. Glenn Shafer will be the banquet speaker.

Conference announcement


January 24 2018:

Towards machines that can read your mind, Professorial lecture, Brookes Open Lecture Series.

Professor Fabio Cuzzolin explores how intelligent machines can negotiate a complex world, fraught with uncertainty, and how they can be enabled to deal with situations they have never encountered in the safest possible way. Interacting naturally with human beings and their complex environments will only be possible if machines are able to put themselves in people’s shoes: to guess their goals, beliefs and intentions – in other words, to read our minds.
Fabio explains just how machines can be provided with this mind-reading ability.

Watch it on Facebook here: https://www.facebook.com/oxfordbrookes/videos/10156698398637908/

Watch it with slides on the Brookes Open Lecture series web site: https://lecturecapture.brookes.ac.uk/Mediasite/Play/9c48ee97ce964dc6a3389836dcacfc0b1d

PDF slides are available here.


August 8 2017:

Fabio was awarded the Horizon 2020 project "SARAS - Smart Autonomous Robotic Assistant Surgeon", on the development of robotic assistant surgeons for laparoscopy.

The team will be in charge of the vision and cognitive modules of the system. The project has a total budget of €4,315,640: Oxford Brookes' share is €596,073. The project's duration is three years. The agreed start date is Mar 1st 2018. The Coordinator is Dr Riccardo Muradore from the University of Verona, Italy. Fabio's role will be Scientific Officer (SO) for the whole project, as well as WP Leader.

List of Horizon 2020 projects funded in 2017

In surgical operations many people crowd the area around the operating table. The introduction of robotics in surgery has not decreased this number. During a laparoscopic intervention with the da Vinci robot, for example, the presence of an assistant surgeon, two nurses and an anaesthetist is required, together with that of the main surgeon teleoperating the robot. The assistant surgeon needs to be present at all times to take care of simple surgical tasks the main surgeon cannot perform with the robotic tools s/he is teleoperating (e.g. suction and aspiration during dissection, moving or holding organs in place to make room for cutting or suturing, using the standard laparoscopic tools). Another expert surgeon is thus required to play the role of the assistant, to properly support the main surgeon using traditional laparoscopic tools.

The goal of SARAS is to develop a next-generation surgical robotic platform that allows a single surgeon (i.e., without the need for an expert assistant surgeon) to execute robotic minimally invasive surgery (R-MIS), thereby increasing the social and economic efficiency of a hospital while guaranteeing the same level of safety for patients. This platform is called solo-surgeon system.


July 24 2017:

The Artificial Intelligence and Vision team, led by PhD student Gurkirt Singh, in partnership with Andreas Lehrmann and Leonid Sigal of Disney Research, has won second place in the latest CVPR 2017 Charades Activity Challenge for action recognition, behind DeepMind's TeamKinetics led by Andrew Zisserman, and third place for temporal detection. Leaderboard

The Charades Activity Challenge aims towards automatic understanding of daily activities, by providing realistic videos of people doing everyday activities. The Charades dataset is collected to provide a unique insight into daily tasks such as drinking coffee, putting on shoes while sitting in a chair, or snuggling with a blanket on the couch while watching something on a laptop. This enables computer vision algorithms to learn from real and diverse examples of our daily dynamic scenarios. The challenge consists of two separate tracks: a classification track and a localization track. The classification track is to recognize all activity categories for given videos ('Activity Classification'), where multiple overlapping activities can occur in each video. The localization track is to find the temporal locations of all activities in a video ('Activity Localization').

Method's description

At a high level, our approach consists of two parallel convolutional neural networks (CNNs) extracting static (i.e., independent) appearance and optical flow features for each frame, plus a third parallel audio stream which extracts features using the SoundNet CNN and scores them using an SVM. We fuse information from the three streams using a convex combination of their respective classification scores to obtain a final result.
We train the overall network using a multi-task loss: (1) Classification: Both streams produce a C-dimensional softmax score vector that is trained using back-propagation with a cross-entropy loss; (2) Regression: In addition to the classification scores, the appearance stream also produces 3-dim. coefficients for each class describing the offset from the boundaries of the current action as well as its overall duration. This network path is trained using a smooth L1 loss.
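
A toy numpy rendering of the two loss terms (purely illustrative; the function names are ours, the actual networks are trained with standard deep learning tooling, and the 3-dim regression targets encode the temporal offsets and duration described above):

    import numpy as np

    def cross_entropy(logits, label):
        # Softmax cross-entropy for a single frame: logits (C,), label an integer class id.
        z = logits - logits.max()
        log_probs = z - np.log(np.exp(z).sum())
        return -log_probs[label]

    def smooth_l1(pred, target):
        # Smooth L1 (Huber) loss, as used for the 3-dim temporal regression coefficients.
        d = np.abs(pred - target)
        return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

    # Toy usage: 157 classes, one frame.
    rng = np.random.default_rng(0)
    loss = cross_entropy(rng.normal(size=157), label=3) + smooth_l1(rng.normal(size=3), np.array([0.2, 0.5, 0.7]))
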
The audio stream consists of feature extraction using the pretrained SoundNet CNN and an SVM classifier producing classification scores in a sliding-window fashion. Audio scores are interpolated to the same frame rate as the other two streams' outputs.
We generate frame-level scores at 12 fps. For temporal action segmentation, we fuse the scores of the three streams at the frame level using a convex combination. The weights for each stream are found by cross-validation on the validation set. Finally, we produce a score vector for 25 regularly sampled frames using top-k mean-pooling in a temporal window around those frames. The frame-level score for each class c is the mean of the top-20 frame-level scores of class c in a temporal window of size 40. Similarly, we apply top-k mean pooling on the scores for class c over the entire duration of the video to obtain video classification scores. We found that a top-k value of 40 works well via cross-validation.
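
A minimal numpy sketch of the fusion and pooling steps just described (illustrative only: the stream weights, window size and k values below are placeholders standing in for those chosen by cross-validation):

    import numpy as np

    def fuse_streams(appearance, flow, audio, weights=(0.5, 0.3, 0.2)):
        # Convex combination of per-frame class scores from the three streams.
        # Each array has shape (num_frames, num_classes); the weights sum to 1.
        w_a, w_f, w_s = weights
        return w_a * appearance + w_f * flow + w_s * audio

    def topk_mean_pool(scores, centre, window=40, k=20):
        # Mean of the top-k per-frame scores of each class within a temporal
        # window centred on the given frame index.
        lo, hi = max(0, centre - window // 2), min(len(scores), centre + window // 2)
        chunk = scores[lo:hi]
        topk = np.sort(chunk, axis=0)[-min(k, len(chunk)):]
        return topk.mean(axis=0)

    # Toy usage: 200 frames, 157 activity classes.
    rng = np.random.default_rng(0)
    fused = fuse_streams(rng.random((200, 157)), rng.random((200, 157)), rng.random((200, 157)))
    frame_scores = [topk_mean_pool(fused, c) for c in np.linspace(0, 199, 25, dtype=int)]
    video_scores = np.sort(fused, axis=0)[-40:].mean(axis=0)   # top-40 pooling over the whole video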


July 16 2017:

The papers

G. Singh, S. Saha, M. Sapienza, P. Torr and F. Cuzzolin, Online Real-time Multiple Spatiotemporal Action Localisation and Prediction

Link to arXiv version

S. Saha, G. Singh and F. Cuzzolin, AMTnet: Action-Micro-Tube regression by end-to-end trainable deep architecture

Link to arXiv version

were accepted for publication at the International Conference on Computer Vision (ICCV 2017), Venice, Italy, October 2017 - the premier venue for Computer Vision - as part of the ongoing world-leading action detection project at the Artificial Intelligence and Vision group.


July 6 2017:

Fabio was invited to speak at the Fourth Summer School on Belief Functions and Their Applications (BELIEF 2017)

Title of the talk: The statistics of belief functions

Although born within the remit of mathematical statistics, the theory of belief functions has later evolved towards subjective interpretations which have distanced it from its mother field, and have drawn it nearer to artificial intelligence. The purpose of this talk, in its first part, is to place belief theory in the context of mathematical probability and its main interpretations, Bayesian and frequentist statistics, contrasting these three methodologies according to their treatment of uncertain data.
In the second part we recall the existing statistical views of belief function theory, due to the work by Dempster, Almond, Hummel and Landy, Zhang and Liu, Walley and Fine, among others.
Finally, we outline a research programme for the development of a fully-fledged theory of statistical inference with random sets. In particular, we discuss the notion of generalised lower and upper likelihoods, the formulation of a framework for logistic regression with belief functions, the generalisation of the classical total probability theorem to belief functions, the formulation of parametric models based on random sets, and the development of a theory of random variables and processes in which the underlying probability space is replaced by a random set space.


June 2017:

Fabio is elected Executive Editor of the Society for Imprecise Probability: Theories and Applications (SIPTA).

The Society for Imprecise Probability: Theories and Applications (SIPTA) was created in February 2002, with the aim of promoting research on imprecise probability. This is done through a series of activities for bringing together researchers from different groups, creating resources for information, dissemination and documentation, and making other people aware of the potential of imprecise probability models.
The Society has its roots in the Imprecise Probabilities Project conceived in 1996 by Peter Walley and Gert de Cooman and its creation has been encouraged by the success of the ISIPTA conferences.
Imprecise probability is understood in a very wide sense. It is used as a generic term to cover all mathematical models which measure chance or uncertainty without sharp numerical probabilities. It includes both qualitative models (comparative probability, partial preference orderings, …) and quantitative models (interval probabilities, belief functions, upper and lower previsions, …). Imprecise probability models are needed in inference problems where the relevant information is scarce, vague or conflicting, and in decision problems where preferences may also be incomplete.


June 13 2017:

The paper The Total Belief Theorem, authored by Dr Chunlai Zhou and Professor Fabio Cuzzolin, has been accepted for publication at Uncertainty in Artificial Intelligence (UAI) 2017.

In this paper, motivated by the treatment of conditional constraints in the data association problem, we state and prove the generalisation of the law of total probability to belief functions, as finite random sets.
Our results apply to the case in which Dempster's conditioning is employed. We show that the solution to the resulting total belief problem is in general not unique, whereas it is unique when the a-priori belief function is Bayesian. Examples and case studies underpin the theoretical contributions.
Finally, our results are compared to previous related work on the generalisation of Jeffrey’s rule by Spies and Smets.
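
For reference, the classical law of total probability that the paper generalises states that, for any event A and any partition B_1, \dots, B_n of the sample space,

    P(A) = \sum_{i=1}^{n} P(A \mid B_i)\, P(B_i).

The total belief theorem seeks the analogous decomposition when the prior and the conditional measures are belief functions and conditioning is performed via Dempster's rule.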

Paper submission PDF


October 2016:

Podcast with Risk Roundup: Advances in AI: Human/Non-Human Action and Gesture Recognition. Prof. Fabio Cuzzolin, Head of Artificial Intelligence and Vision at Oxford Brookes University, Oxford, United Kingdom, participates in Risk Roundup to discuss "Advances in Artificial Intelligence: Human and Non-Human Gesture and Action Recognition".

How would we define and describe a man-machine or a machine-machine interface, and why is it relevant to understanding Artificial Intelligence? A mediator between human (and non-human) users and machines, a man-machine or machine-machine interface is basically a system that takes care of the entire human/non-human communication process. It is responsible for delivering the machine's or computer's knowledge, functionality and available information in a way that is compatible with the end-user’s communication channels, be they human or non-human. It then translates the user’s (human or non-human) actions (user input) into a form (instructions/commands) that is understandable by a machine.

As increasingly complex Artificial Intelligence based systems, products and services rapidly emerge across nations, more user-friendly man-machine and machine-machine interfaces are becoming increasingly necessary for their effective utilization, and consequently for the success they were designed for.

Published on Risk Group: https://www.riskgroupllc.com/advances-in-artificial-intelligence-human-and-non-human-gesture-and-action-recognition/


October 2016:

Fabio has been invited to be a keynote speaker at CSA 2016, the 2nd Conference on Computing Systems and Applications. The second edition of the Computing Systems and Applications (CSA) conference will take place from December 13 through December 14, 2016. The conference is open to researchers, academics and industry practitioners interested in the latest scientific and technological advances occurring in different fields of computer science. It constitutes a leading venue for students, researchers, academics and industry to share their new ideas, original research findings and practical experiences across all computer science disciplines.

CSA 2016 will be held in the Ecole Militaire Polytechnique (EMP) located in Algiers; the capital and the largest city of Algeria. This pioneering engineering college is situated in Bordj El Bahri, a lively city lapped by the Mediterranean Sea and facing the well-known Algiers bay. EMP is one of the oldest technical schools for the training of highly-qualified academics in Algeria. Its know-how covers teaching and research activities in the fields of computer science, electrical and mechanical engineering, and chemistry.

Download the Call for Papers at http://www.emp.edu.dz/csa/pdf/CSA_2016_CFP_Final_Flyer.pdf


July 2016:

Fabio is promoted to Professor



July 14 2016:

Invited seminar "Belief functions: past, present and future", part of the statistics colloquia at Harvard University, Department of Statistics.

https://www.facebook.com/HarvardChan.Biostatistics/photos/pb.395681920606124.-2207520000.1468515615./606757716165209/?type=3

The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference, to be later developed by Glenn Shafer as a general framework for modelling epistemic uncertainty. Belief theory and the closely related random set theory form a natural framework for modelling situations in which data are missing or scarce: think of extremely rare events such as volcanic eruptions or power plant meltdowns, problems subject to huge uncertainties due to the number and complexity of the factors involved (e.g. climate change), but also the all-important issue with generalisation from small training sets in machine learning.
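
For context, a belief function on a finite frame of discernment \Theta is induced by a mass assignment m : 2^\Theta \to [0, 1], with m(\emptyset) = 0 and masses summing to one, via

    Bel(A) = \sum_{B \subseteq A} m(B), \qquad Pl(A) = \sum_{B \cap A \neq \emptyset} m(B) = 1 - Bel(A^c),

where Bel(A) and Pl(A) act as lower and upper bounds on the probability of A (standard definitions, recalled here purely as background).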

This short talk, abstracted from an upcoming half-day tutorial at IJCAI 2016, is designed to introduce non-experts to the principles and rationale of random sets and belief function theory, review its rationale in the context of the frequentist and Bayesian interpretations of probability as well as in relation to the other main approaches to non-additive probability, survey the key elements of the methodology and the most recent developments, and discuss current trends in both its theory and applications. Finally, a research program for the future is outlined, which includes a robustification of Vapnik's statistical learning theory for an Artificial Intelligence 'in the wild'.

Slides in PDF format


July 13 2016:

The paper Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos, led by first author Suman Saha, was accepted for publication at BMVC 2016.

Project web site

In this work we propose a new approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages.
In stage 1, a cascade of deep region-proposal and detection networks is employed to classify regions of each video frame potentially containing an action of interest. In stage 2, appearance and motion cues are combined by merging the detection boxes and softmax classification scores generated by the two cascades. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called 'action tubes', are constructed by solving two optimisation problems via dynamic programming.
In the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap; in the second pass, temporal trimming is performed by ensuring label consistency across all constituent detection boxes.
We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly lower detection latency at test time.
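To give a flavour of the first-pass linking in stage 3, here is a minimal Python sketch of Viterbi-style path building over per-frame detections. It is a simplified illustration under our own assumptions (the function names, the IoU overlap term and the trade-off weight lam are ours), not the authors' actual implementation.

    import numpy as np

    def iou(a, b):
        """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def link_action_path(boxes, scores, lam=1.0):
        """Viterbi-style linking of per-frame detections into one action path.

        boxes[t]  : list of candidate boxes in frame t
        scores[t] : class-specific score of each candidate in frame t
        Maximises sum over frames of (score + lam * IoU with the previous box).
        """
        T = len(boxes)
        best = [np.array(scores[0], dtype=float)]   # best path value ending at each box
        back = []                                   # back-pointers per frame transition
        for t in range(1, T):
            cur = np.full(len(boxes[t]), -np.inf)
            ptr = np.zeros(len(boxes[t]), dtype=int)
            for j, bj in enumerate(boxes[t]):
                for i, bi in enumerate(boxes[t - 1]):
                    v = best[t - 1][i] + scores[t][j] + lam * iou(bi, bj)
                    if v > cur[j]:
                        cur[j], ptr[j] = v, i
            best.append(cur)
            back.append(ptr)
        # Backtrack from the best-scoring terminal box.
        j = int(np.argmax(best[-1]))
        path = [j]
        for t in range(T - 2, -1, -1):
            j = int(back[t][j])
            path.append(j)
        return list(reversed(path))   # index of the chosen box in each frame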

Arxiv paper coming soon


July 1 2016:

The Artificial Intelligence and Vision research group's entry, led by PhD student Gurkirt Singh, has won second place in the latest CVPR ActivityNet Large Scale Activity Detection Challenge. Leaderboard

The ActivityNet Large Scale Activity Recognition Challenge is a half-day workshop held on July 1 in conjunction with CVPR 2016, in Las Vegas, Nevada. The workshop establishes a new challenge to stimulate the computer vision community to develop new algorithms and techniques that improve the state of the art in human activity understanding. The data for this challenge is based on the newly published ActivityNet benchmark.

The challenge focuses on recognizing high-level, goal-oriented activities in user-generated videos, similar to those found on internet portals. The challenge is tailored to 200 activity categories across two different tasks: (a) Untrimmed Classification Challenge: given a long video, predict the labels of the activities present in the video; (b) Detection Challenge: given a long video, predict the labels and temporal extents of the activities present in the video.

Report in PDF format


January 2016:

Fabio's tutorial "Belief functions for the working scientist" has been accepted for a half-day presentation at IJCAI 2016, the premier international conference on Artificial Intelligence, which will take place at the Hilton Midtown Hotel, New York City, on July 9-15 2016.

http://ijcai-16.org/

A dedicated web site can be found HERE.

The theory of belief functions, sometimes referred to as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference, and was later developed by Glenn Shafer as a general framework for modelling epistemic uncertainty. Belief theory and the closely related random set theory form a natural framework for modelling situations in which data are missing or scarce: think of extremely rare events such as volcanic eruptions or power plant meltdowns, problems subject to huge uncertainties due to the number and complexity of the factors involved (e.g. climate change), but also the all-important issue with generalisation from small training sets in machine learning.

This tutorial is designed to introduce the principles and rationale of random sets and belief function theory to the wider AI audience, survey the key elements of the methodology and the most recent developments, and make AI practitioners aware of the set of tools that have been developed for reasoning in the belief function framework on real-world problems. Attendees will acquire first-hand knowledge of how to apply these tools to significant problems in major application fields such as computer vision, climate change, and others. The performance of these approaches will be critically compared with that of more classical regression, classification or estimation methods, to highlight the advantage of modelling lack of data explicitly.


February 2015:

Fabio was invited to the Oxford Martin School workshop on "Artificial Intelligence and Predictive Modelling" with Garry Kasparov.

http://www.oxfordmartin.ox.ac.uk/news/2015_Kasparov_visit

Fabio was also invited to a private dinner with Garry and other distinguished guests at Balliol College.

When Garry Kasparov visited the Oxford Martin School this week, he came with a strong message about innovation: society has become too risk averse, and we are at risk of failing to innovate if investor mindsets don’t change soon. During two lively workshops, the former World Chess Champion debated the future of innovation with 20 researchers from the University of Oxford, Oxford Brookes and industry. He also delivered a lecture to an audience of 440 at the University of Oxford’s Examination Schools. Top of Kasparov’s agenda was the issue of risk aversion and its impact on societal progress. “A fear of uncertainty holds us back from doing things quickly and productively,” he argued in his second workshop. “Just look at the airline industry. Planes are getting better in terms of comfort and fuel efficiency but not going faster. Our preference is for comfort over speed. This mentality is reflected in many different areas; we have become a risk-averse society.”


September 2014:

Fabio's monograph entitled "Visions of a Generalized Probability Theory" has been published by Lambert Academic Publishing

https://www.lap-publishing.com/

The theory of evidence (also known as ‘evidential reasoning’, ‘belief theory’ or ‘Dempster-Shafer theory’) is, perhaps, one of the most successful frameworks for uncertainty modelling, and arguably the most straightforward and intuitive approach to a generalized probability theory. Emerging in the late Sixties from a profound criticism of the more classical Bayesian theory of inference and modelling of uncertainty, evidential reasoning has stimulated in the last four decades an extensive discussion on the epistemic nature of both subjective ‘degrees of beliefs’ and frequentist ‘chances’.

Computer vision is a fast-growing discipline whose ambitious goal is to equip machines with the intelligent visual skills that humans and animals are endowed with by Nature, allowing them to interact effortlessly with complex and inherently uncertain environments. This book shows how the fruitful interaction of computer vision and belief calculus is capable of stimulating significant advances in both fields. Novel results on the mathematics of belief functions are developed in response to the issues posed by fundamental vision problems, to which, in turn, novel evidential solutions are proposed.


September 2014:

Springer's Lecture Notes in Artificial Intelligence Volume 8764 entitled Belief Functions: Theory and Applications, edited by Fabio, is available online.

http://www.springer.com/computer/ai/book/978-3-319-11190-2

Belief Functions: Theory and Applications
Third International Conference, BELIEF 2014, Oxford, UK, September 26-28, 2014. Proceedings
Series: Lecture Notes in Computer Science, Vol. 8764
Subseries: Lecture Notes in Artificial Intelligence
Cuzzolin, Fabio (Ed.)
2014, XVIII, 444 p. 92 illus.

This book constitutes the thoroughly refereed proceedings of the Third International Conference on Belief Functions, BELIEF 2014, held in Oxford, UK, in September 2014. The 47 revised full papers presented in this book were carefully selected and reviewed from 56 submissions. The papers are organized in topical sections on belief combination; machine learning; applications; theory; networks; information fusion; data association; and geometry.


September 26-28 2014:

The Third Edition of the International Conference on Belief Functions was successfully held at St Hugh's College, Oxford.

http://cms.brookes.ac.uk/staff/FabioCuzzolin/BELIEF2014/

BELIEF 2014, the third edition of the series of conferences on the theory and application of belief functions is already over, and it is time to sum up the outcomes of this exciting experience and draw some lessons for the future of the conference and the community at large.


November 2012:

Fabio's monograph on "The geometry of uncertainty" has been conditionally approved by Springer-Verlag's "Information Science and Statistics" series

http://www.springer.com/series/3816

The book is about the geometry of the various mathematical descriptions of uncertainty, known as "imprecise probabilities", proposed in the last forty years as alternatives or competitors to classical probability theory. These objects can be seen as points living in a certain geometrical space, and can therefore be handled by geometric means. The book thus provides a geometrical language for working with imprecise probabilities.
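As a simple illustration of the geometric viewpoint (our own gloss, not an excerpt from the book): on a binary frame $\Theta = \{x, y\}$, a belief function is completely determined by the masses $m(\{x\})$ and $m(\{y\})$, since $m(\Theta) = 1 - m(\{x\}) - m(\{y\})$. It can therefore be identified with a point of the triangle
$$ \big(m(\{x\}),\, m(\{y\})\big) \in \mathbb{R}^2, \qquad m(\{x\}),\, m(\{y\}) \ge 0, \quad m(\{x\}) + m(\{y\}) \le 1, $$
whose vertices $(0,0)$, $(1,0)$ and $(0,1)$ correspond, respectively, to total ignorance and to the two categorical belief functions, while Bayesian belief functions (ordinary probabilities) lie on the segment $m(\{x\}) + m(\{y\}) = 1$.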

The reviewers commented that "there is no other book addressing the Dempster-Shafer theory of evidence in such exhaustive detail", and that "there has not been a detailed study of the geometry of belief functions and as such I believe this book would be a very welcome addition to the literature."


October 12 2012:

Fabio has been awarded one of the Next 10 Awards by the Faculty of Technology, Design and Environment (TDE).

The committee overseeing the 'Next 10 Programme' met recently and supported Fabio’s application. Activities should begin this academic year at a point to be agreed with the HoD. Rachel Harrison has been assigned as mentor for the programme and Fabio will also liaise closely with Nigel Crook.
A PhD student will be engaged as soon as possible in order to provide maximum strategic benefit to the development of the planned research and growth of the area. A key objective will be the future development of a successful and focused team. The student will be expected to contribute to such things as the development of major funding proposals in addition to carrying out a formal programme of related PhD study.

Next 10 is a research accelerator programme, designed to help the top emerging researchers in the Faculty progress towards professorial status and a leadership position within their discipline. The award involves a PhD studentship. Start date: October 2012.


September 2012:

Fabio has taken on the role of Head of the Artificial Intelligence (formerly Machine Learning) research group.


September 5 2012:

Fabio has been awarded the Outstanding Reviewer Award at the latest British Machine Vision Conference (BMVC2012) in Surrey.


July 2012:

Fabio's student Michael Sapienza has been awarded the Best Poster Prize at the latest 2012 INRIA Summer School on Machine Learning and Visual Recognition, for his poster "Learning discriminative space-time actions from weakly labelled videos".

Current state-of-the-art action classification methods derive action representations from the entire video clip in which the action unfolds, even though this representation may include parts of actions and scene context which are shared amongst multiple classes. For example, different actions involving the movement of the hands may be performed whilst walking, against a common background. In this work, we propose an action classification framework in which discriminative action subvolumes are learned in a weakly supervised setting, owing to the difficulty of manually labelling massive video datasets. The learned sub-action models are used to simultaneously classify video clips and to localise actions in space-time. Each subvolume is cast as a bag-of-features (BoF) instance in a multiple instance learning (MIL) framework, which in turn is used to learn its class membership. We demonstrate quantitatively that the classification performance of our proposed algorithm is comparable to, and in some cases superior to, the current state of the art on the most challenging video datasets, whilst additionally estimating space-time localisation information.
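As a rough, generic illustration of the multiple instance learning idea (the code below is our own sketch, with made-up names and toy data, not the method described in the poster): each video is treated as a bag of subvolume descriptors, and the bag is scored by its best-scoring instance, which simultaneously localises the putative action.

    import numpy as np

    def bag_score(instances, w, b=0.0):
        """Score a video (bag) by its highest-scoring subvolume (instance).

        instances : (n_subvolumes, d) array of bag-of-features descriptors
        w, b      : a linear instance-level classifier (e.g. learned by MI-SVM-style
                    alternation: label the top instance, retrain, repeat)
        """
        scores = instances @ w + b
        return scores.max(), int(scores.argmax())   # bag score and index of the
                                                    # subvolume localising the action

    # Toy usage with random (made-up) descriptors and weights:
    rng = np.random.default_rng(0)
    video = rng.random((10, 128))      # 10 candidate subvolumes, 128-d BoF each
    w = rng.standard_normal(128)
    score, best_subvolume = bag_score(video, w)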


July 19 2011:

Fabio has been promoted to Reader, effective September 1st 2011.


July 25 2011:

Fabio has been awarded a best poster award for his poster entitled "Geometric conditional belief functions in the belief space" at the latest ISIPTA'11 Symposium on Imprecise Probabilities.

In this poster we explore geometric conditioning in the belief space B, in which belief functions are represented by the vectors of their belief values b(A). We adopt, once again, distance measures d from the classical Lp family, as a further step towards a complete analysis of the geometric approach to conditioning. We show that geometric conditional belief functions in B are more complex, less naive objects than their counterparts in the mass space, whose interpretation in terms of degrees of belief is, however, less natural.
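Schematically (our paraphrase of the setting, using generic notation rather than that of the poster): each belief function $b$ on a frame $\Theta$ is identified with the vector $\big(b(A)\big)_{\emptyset \subsetneq A \subsetneq \Theta}$ of its belief values, a point of the belief space $\mathcal{B}$. Given a conditioning event $B \subseteq \Theta$, the geometric conditional belief function induced by an $L_p$ distance is the closest admissible point,
$$ b_{L_p}(\cdot \mid B) \;\doteq\; \arg\min_{b' \in \mathcal{B}_B} d_{L_p}(b, b'), $$
where $\mathcal{B}_B$ denotes the set of belief functions whose mass is entirely assigned to subsets of $B$.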


July 19 2011:

Fabio has received tenure and is now a Senior Lecturer with the Department of Computing and Communication Technologies, Oxford Brookes University.


February 23 2011:

Fabio has been awarded support for his EPSRC First Grant! This is a two-year grant worth £122K, which will involve hiring a postdoctoral researcher in year 2.


November 12 2010:

Fabio has been nominated Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part C!


June 15 2010:

Following the latest Workshop on the Theory of Belief Functions, Fabio has been elected to the Board of Directors of the Belief Functions and Applications Society with 27 votes.


December 2008:

Fabio Cuzzolin received the best paper award for outstanding technical contribution, assigned to the paper:

Alternative formulations of the theory of evidence based on basic plausibility and commonality assignments

at the Tenth Pacific Rim International Conference on Artificial Intelligence (PRICAI-08), Hanoi, Vietnam, 15-19 December 2008. URL: http://www.jaist.ac.jp/PRICAI-08/

The Pacific Rim International Conference on Artificial Intelligence (PRICAI) is a biennial international event which concentrates on AI theories, technologies and their applications in areas of social and economic importance for countries in the Pacific Rim. In the past, the conference has been held in Nagoya (1990), Seoul (1992), Beijing (1994), Cairns (1996), Singapore (1998), Melbourne (2000), Tokyo (2002), Auckland (2004) and Guilin (2006).

The paper introduces two novel alternative mathematical formulations of the theory of belief functions, or "theory of evidence". We prove that the equivalent representations of evidence given by plausibility and commonality functions have the combinatorial structure of sum functions, just like belief functions do, and we compute their Möbius inverses.
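For reference, these are the standard definitions involved (well-known formulas, not reproduced from the paper): given a basic probability assignment $m : 2^\Theta \to [0,1]$, belief, plausibility and commonality are the sum functions
$$ b(A) = \sum_{B \subseteq A} m(B), \qquad pl(A) = \sum_{B \cap A \neq \emptyset} m(B), \qquad q(A) = \sum_{B \supseteq A} m(B), $$
and the mass assignment can be recovered from the belief function via Möbius inversion,
$$ m(A) = \sum_{B \subseteq A} (-1)^{|A \setminus B|}\, b(B). $$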