Emerging applications of artificial intelligence (AI) are raising awareness of the limitations of established machine learning (ML) approaches in situations in which
humans are involved. Smart cars, for instance, need to make reliable predictions about human behaviour in real time, e.g. in order to pre-emptively adjust speed and course
to cope with a group of children’s possible decision to suddenly cross the road in front of them. To date, the automated recognition of current behaviour builds on the recent
success of deep learning, based on artificial neural networks with many layers, to efficiently identify motion patterns in the available video streams
[2,3]. Motion patterns, however, can be deceiving, as humans can suddenly change their minds based on their own internal mental dynamics and on things they spot in the scene
(e.g., children seeing an ice cream van on the other side of the road). Moreover, humans are capable of making reliable predictions of future behaviour even when no motion
is present, just by quickly assessing the ‘type’ of person they are looking at (e.g. an elderly person standing in a hallway is likely to take the elevator rather than the stairs).
Theory of mind (ToM) capabilities, i.e., the ability to ‘read’ other agents’ mental states, are crucial to the development of a next-generation, human-centric artificial intelligence.
In a mutually beneficial process, computational models developed within AI may provide new insight into how these mechanisms work in the human brain.
We support the view that a fruitful cross-fertilisation of neuroscience and machine learning can enable significant advances in both fields, by allowing:
(i) the formulation of computational models of human theory of mind, leveraging current frontier efforts in ML and AI;
(ii) the development of machine theory of mind models informed by the most recent neuroscientific evidence, capable of going beyond simple pattern recognition towards
prediction in complex, human-centred scenarios.