About Us






The Lab

The Visual Artificial Intelligence Laboratory was founded in 2012 by Professor Cuzzolin, under the name of the 'Machine Learning' research group, and has since been conducting work at the current boundaries of human action recognition in computer vision. Prof Cuzzolin is a leading scientist in the mathematics of uncertainty, in particular random set and belief function theory. The team's interests have since expanded towards machine learning and general artificial intelligence, robotics, big data, and e-Health.

The team is projected to comprise 18 to 20 members in 2018, including four faculty members, among them Prof Chrisina Jayne (Head of School), Dr Faye Mitchell (Postgraduate Lead in Computing) and Dr Tjeerd Olde-Scheper (Senior Lecturer).


Research Focus

In just a few years the group has built a leading position in the field of deep learning for action detection, which has delivered the best detection accuracies to date and the only system so far able to localise multiple actions on the image plane in (better than) real time. The team's effort is now shifting towards topics at the frontier of visual AI, such as future action prediction, deep video captioning and the development of a theory of mind for visual AIs. We are also working towards applying our expertise in this area to autonomous driving.

Cuzzolin's reputation in uncertainty theory and belief functions stems from his formulation of a geometric approach to uncertainty, in which probabilities, possibilities, belief measures and random sets are represented and analysed by geometric means. Current work includes generalising the law of total probability and developing the notions of upper and lower likelihoods and of generalised random variables.
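As a back-of-the-envelope illustration of the objects this geometric view works with (a sketch for the curious reader, not the Lab's own code), the Python snippet below builds a belief function from a mass assignment on a three-element frame and prints the belief and plausibility of every event; the vector of these belief values is the kind of point-in-a-simplex representation the geometric approach studies. The frame, masses and names used here are illustrative assumptions.

    from itertools import combinations

    # Sketch only: a mass function assigns probability mass to *subsets* of
    # the frame of discernment Theta, not just to single outcomes.
    theta = frozenset({"a", "b", "c"})
    mass = {
        frozenset({"a"}): 0.5,
        frozenset({"b", "c"}): 0.3,
        theta: 0.2,  # mass left on the whole frame encodes ignorance
    }

    def subsets(s):
        """All non-empty subsets of a frozenset."""
        items = sorted(s)
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                yield frozenset(combo)

    def bel(event):
        """Belief: total mass committed to subsets of the event."""
        return sum(m for focal, m in mass.items() if focal <= event)

    def pl(event):
        """Plausibility: total mass not contradicting the event."""
        return sum(m for focal, m in mass.items() if focal & event)

    # The vector of belief values over the non-trivial events can be read as
    # the coordinates of a point in a "belief space", the object analysed
    # geometrically in this line of work.
    for event in subsets(theta):
        print(set(event), "bel =", round(bel(event), 2), "pl =", round(pl(event), 2))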

Within machine learning, the team's work is directed at understanding the mathematics of deep learning, providing new, robust foundations for statistical learning theory, and developing novel tools based on the theory of random sets, in particular generalisations of the logistic regression framework and of max-entropy classifiers. We are part of research consortia applying machine learning to human-robot interaction (the creation of emotional avatars), surgical robotics (robotic assistant-surgeon arms) and e-health (home monitoring and the early diagnosis of dementia).
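For readers unfamiliar with the terminology, the sketch below shows the classical multinomial logistic (maximum-entropy) classifier that such random-set generalisations take as their starting point; the toy data, learning rate and iteration count are illustrative choices and this is not the Lab's generalised model.

    import numpy as np

    # Minimal sketch of a standard max-entropy (softmax regression) classifier.
    rng = np.random.default_rng(0)

    # Toy data: three Gaussian blobs in 2D, one per class (illustrative only).
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
                   for c in ([0, 0], [3, 0], [0, 3])])
    y = np.repeat([0, 1, 2], 50)

    n, d, k = X.shape[0], X.shape[1], 3
    W = np.zeros((d, k))
    b = np.zeros(k)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Plain gradient descent on the cross-entropy (maximum-entropy) objective.
    Y = np.eye(k)[y]                            # one-hot targets
    for _ in range(500):
        P = softmax(X @ W + b)                  # predicted class probabilities
        W -= 0.5 * (X.T @ (P - Y) / n)
        b -= 0.5 * (P - Y).mean(axis=0)

    pred = softmax(X @ W + b).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())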




Funding

The Lab currently runs on a budget of around £1.2M (not fully incorporating the €4.3M Horizon 2020 project SARAS). The group also receives around £21,000 of QR funding from the Faculty each year. The budget is projected to increase significantly in 2019.

Funders to date include the EPSRC, the EU Horizon 2020 programme and Innovate UK, as well as a number of international and local partner companies.




Facilities

For our computation-intensive research we rely on a network of GPU-powered workstations:

- Mercury: 4 GPUs
- Mars: 4 GPUs, 5 HDDs (18.5 TB), 1 SSD (1 TB)
- Sun: 2 GPUs, 4 HDDs (12.5 TB)
- Jupiter: 1 GPU, 2 HDDs (4.2 TB)
- Earth: 2 HDDs (4 TB)
- Venus: 1 HDD (0.5 TB)

We also possess two CPU servers, each with 48 CPU cores and 128 GB of RAM, and have access to the Faculty's High Performance Computer.

A bid for an additional 10-GPU workstation, funded through the School's capital expenditure, has been put forward; further equipment may come as a result of pending grant applications.




Robotics & Fab Lab

The adjacent Robotics Lab is well equipped, with 1 Baxter robot (with AR10 hand, vacuum and parallel grippers), 1 RoboThespian (Artie), 4 NAO robots, 2 in-house designed and built humanoid robots (Blu), 5 AX-18A Smart Robotic Arms and 6 humanoid Minis. The Fabrication Lab houses 5 Aunar Upgraded Desktop and 2 Raise3D N2 Plus 3D printers. The Performance Augmentation Lab has 2 Microsoft HoloLens and 2 Epson BT-200 & BT-2000 headsets.




Partners

The Visual AI Lab collaborates closely with various research groups and companies: the Cognitive Robotics research group led by Prof Crook; Oxford University's Torr Vision Group (formerly the Brookes Vision Group); the Centre for Movement, Occupational and Rehabilitation Sciences (MOReS) led by Prof Dawes; the Autonomous Driving group led by Dr Bradley; Prof Thomas Lukasiewicz of Oxford University's Department of Computer Science; the Neuroscience group at Cambridge University; the Performance Augmentation Lab led by Dr Wild; and the Centre for Biomedical Cybernetics of the University of Malta, led by Prof Camilleri.

Industrial partners include Huawei Technologies, BMW Group, Cortexica Vision Systems, Sportslate/Createc, Oxehealth, Disney Research and VICON, among others.