Research Theme: Smart Sensors for Automotive
Monitoring driver behaviour
Driver identification
Context-aware smart cars
We are working towards designing an intelligent dashboard, capable of understanding the driver's gestures and the driver's state (e.g. level of attention, mood), with a view to vehicles capable of fully assisted driving. This goes beyond the simple gestural commands implemented by some carmakers on top-of-the-range cars in the last year or two. Importantly, we aim at full-body motion interpretation, as opposed to simple localised hand gestures (compare Leap Motion's gestural laptop command interpretation https://www.youtube.com/watch?v=TuBcLbklH5I or Samsung's smart TV gesture recognition technology: https://www.youtube.com/watch?v=wDmosRnEfiw).
Our objective is to go beyond what is provided, for example, by Visteon (https://www.youtube.com/watch?v=LlLO33zDZkY), which limits itself to small hand-gesture interpretation, and to work towards a full interpretation of everything going on within the cockpit that could be relevant.
The outcome of our recent EPSRC project on gait identification shows that people can be identified not only by the way they walk (their walking gait), but also by the way they perform typical gestures, such as the gestural commands drivers send to their smart cars.

This principle can be applied both to driver identification by gait as they approach their own car, and to in-cockpit identification via gestures. The multilinear models developed in the course of the past project are tailored for batch classification: we need to develop an online version which is robust to all the nuisance factors involved (viewpoint, occlusion, clothing and illumination, among others).
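As an illustration of the batch-versus-online distinction, the sketch below shows an incremental nearest-class-mean classifier over hypothetical fixed-length gait/gesture feature vectors. It is a minimal Python sketch only: it is not the project's multilinear model, and the identities and feature values are placeholders.

# Illustrative online identification: identities are enrolled and refined one
# observation at a time, with no batch retraining. Features are placeholders.
import numpy as np


class OnlineNearestClassMean:
    """Keeps a running mean per identity and classifies by nearest mean."""

    def __init__(self):
        self.means = {}   # identity -> running mean feature vector
        self.counts = {}  # identity -> number of observations seen so far

    def update(self, identity, features):
        """Incorporate one new labelled observation without retraining."""
        x = np.asarray(features, dtype=float)
        if identity not in self.means:
            self.means[identity] = x.copy()
            self.counts[identity] = 1
        else:
            self.counts[identity] += 1
            # Incremental mean update: mean += (x - mean) / n
            self.means[identity] += (x - self.means[identity]) / self.counts[identity]

    def predict(self, features):
        """Return the identity whose running mean is closest to the observation."""
        x = np.asarray(features, dtype=float)
        return min(self.means, key=lambda k: np.linalg.norm(x - self.means[k]))


clf = OnlineNearestClassMean()
clf.update("driver_A", [0.2, 1.1, 0.7])
clf.update("driver_B", [1.5, 0.3, 0.9])
clf.update("driver_A", [0.25, 1.0, 0.8])
print(clf.predict([0.22, 1.05, 0.75]))  # expected: "driver_A"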
This project will leverage our expertise in real-time 3D reconstruction, camera tracking, online interactive labelling and action recognition to develop novel, ground-breaking driver-assistive technologies. This technology will allow a car to detect its 3D environment and its current position within it. Furthermore, it allows a user to interactively label objects and surfaces, so that the car may learn and recognise categories such as 'road', 'car', 'tree', 'pedestrian', etc.
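A minimal sketch of how interactive labels might propagate across a reconstructed map is given below, assuming the reconstruction yields 3D points or surface-patch centroids. The data, names and nearest-neighbour rule are illustrative placeholders, not the actual labelling pipeline.

# Illustrative interactive label propagation: the user labels a few 3D points,
# and unlabelled points are assigned the category of the nearest labelled one.
import numpy as np


class LabelledMap:
    def __init__(self):
        self.points = []  # 3D positions of user-labelled points
        self.labels = []  # corresponding category names

    def add_label(self, position, category):
        """Record an interactive label supplied by the user."""
        self.points.append(np.asarray(position, dtype=float))
        self.labels.append(category)

    def query(self, position):
        """Return the category of the nearest labelled point."""
        p = np.asarray(position, dtype=float)
        dists = [np.linalg.norm(p - q) for q in self.points]
        return self.labels[int(np.argmin(dists))]


m = LabelledMap()
m.add_label([0.0, -1.5, 5.0], "road")
m.add_label([2.0, 0.0, 8.0], "car")
m.add_label([-3.0, 1.0, 12.0], "tree")
print(m.query([0.1, -1.4, 6.0]))  # expected: "road"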

The project also aims to achieve state-of-the-art real-time object tracking and action recognition, in order to predict the trajectories of moving objects (such as other vehicles or pedestrians) in the vicinity of the vehicle, interpret their current behaviour and anticipate their possible future intentions.
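For trajectory prediction, a standard baseline is a constant-velocity Kalman filter over tracked 2D positions. The sketch below is such a baseline with placeholder noise settings and simulated measurements; it is not the project's tracker, which would use richer motion and intention models.

# Illustrative trajectory prediction with a constant-velocity Kalman filter.
# State is [px, py, vx, vy]; measurements are noisy 2D positions.
import numpy as np


class ConstantVelocityKF:
    def __init__(self, dt=0.1, meas_noise=0.5, proc_noise=0.1):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = proc_noise * np.eye(4)
        self.R = meas_noise * np.eye(2)
        self.x = np.zeros(4)        # state estimate
        self.P = np.eye(4) * 10.0   # state covariance

    def step(self, z):
        """Predict, then correct with a new position measurement z = (px, py)."""
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    def forecast(self, steps):
        """Extrapolate the current state to predict future positions."""
        x, out = self.x.copy(), []
        for _ in range(steps):
            x = self.F @ x
            out.append(x[:2].copy())
        return out


kf = ConstantVelocityKF(dt=0.1)
for t in range(20):              # simulated pedestrian moving at ~1 m/s in x
    kf.step([0.1 * t, 2.0])
print(kf.forecast(5))            # predicted positions over the next 0.5 s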
Lab Member(s): Suman Saha