Emerging applications of artificial intelligence are bringing about important paradigm shifts in machine learning and computer vision.
Machines need a comprehensive awareness of what takes place in complex environments, together with the ability to use this understanding to predict the future behaviour of other machines and humans.
In our first work we presented a method to predict an entire ‘action tube’
(a set of temporally linked bounding boxes) in a trimmed video by observing
only a smaller portion of it. Predicting where an action is going to take place in
the near future is essential to many computer vision-based applications, such as
autonomous driving or surgical robotics. Importantly, this has to be done in real time
and in an online fashion. We proposed a Tube Prediction network (TPnet),
which jointly predicts the past, present and future bounding boxes, along with
their action classification scores. At test time TPnet is used in a (temporal) sliding
window setting, and its predictions are fed into a tube estimation framework
to construct video-long action tubes covering not only the observed part of
the video but also the unobserved part. In addition, the proposed action tube
predictor enables the completion of action tubes for segments of the video that have not yet been observed.
We quantitatively demonstrated the latter ability, and the fact that TPnet improves
state-of-the-art detection performance, on a standard action detection
benchmark, the J-HMDB-21 dataset.
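To make the sliding-window inference concrete, below is a minimal Python sketch of how such a predictor could be applied to the observed portion of a video to accumulate box hypotheses for both observed and unobserved frames. The tpnet_forward stub, its output shapes, the anchor count, the window size and the temporal offsets are all illustrative assumptions rather than the actual TPnet interface; in a real system the accumulated per-frame hypotheses would then be scored and linked (for example greedily, by class score and spatial overlap) into video-long tubes by the tube estimation framework.

```python
import numpy as np

# Hypothetical stand-in for a trained TPnet. Given a window of frames it
# returns, for a few anchors, boxes regressed at past/present/future
# temporal offsets plus per-class action scores. All shapes, the anchor
# count and the offsets are illustrative assumptions, not the actual
# TPnet interface.
def tpnet_forward(frames, offsets=(-4, -2, 0, 1, 5, 10)):
    num_anchors, num_classes = 8, 21                       # J-HMDB-21 has 21 action classes
    boxes = np.random.rand(num_anchors, len(offsets), 4)   # (x1, y1, x2, y2), normalised
    scores = np.random.rand(num_anchors, num_classes)
    return boxes, scores

def predict_per_frame_boxes(video, window=8, stride=1,
                            offsets=(-4, -2, 0, 1, 5, 10)):
    """Slide the predictor over the observed frames and accumulate, per
    frame index, the boxes predicted for it -- including indices beyond
    the last observed frame, i.e. predictions for the unobserved part."""
    per_frame = {}                                # frame index -> list of (box, scores)
    for t in range(0, len(video) - window + 1, stride):
        boxes, scores = tpnet_forward(video[t:t + window], offsets)
        anchor = t + window - 1                   # offsets measured from the window's last frame
        for a in range(boxes.shape[0]):
            for k, off in enumerate(offsets):
                per_frame.setdefault(anchor + off, []).append((boxes[a, k], scores[a]))
    return per_frame

# Toy usage: 20 observed frames; any index >= 20 is a hypothesis for the
# unobserved part of the video.
video = [np.zeros((240, 320, 3)) for _ in range(20)]
hypotheses = predict_per_frame_boxes(video)
future_frames = sorted(i for i in hypotheses if i >= len(video))
print(future_frames)                              # frames 20..29 with these settings
```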