Although artificial intelligence (AI) has improved remarkably in recent years, its inability to deal with fundamental uncertainty
severely limits its application. This proposal re-imagines AI with a proper treatment of the uncertainty stemming from our necessarily
partial knowledge of the world.
As currently practised, AI cannot confidently make predictions robust enough to stand the test of data generated by processes
different (even in tiny details, as shown by 'adversarial' results able to fool deep neural networks) from those observed at training
time. While recognising this issue under different names (e.g. 'overfitting'), traditional machine learning seems unable to address it in
non-incremental ways. As a result, AI systems exhibit brittle behaviour and find it difficult to operate in new situations, e.g. adapting to
driving in heavy rain or to other road users' different driving styles, for instance those deriving from cultural traits.
Epistemic AI's overall objective is to create a new paradigm for next-generation artificial intelligence, one that provides worst-case
guarantees on its predictions through a proper modelling of real-world uncertainties.
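To make the notion of a worst-case guarantee concrete, the sketch below is a minimal, hypothetical illustration (not the project's formalism): epistemic uncertainty about the data-generating process is represented as a small set of candidate probability distributions, and the action chosen is the one maximising the worst-case (lower) expected utility over that set. All numbers are made up for illustration.

import numpy as np

# Illustrative sketch only: epistemic uncertainty represented as a finite set of
# candidate class-probability vectors (a "credal" prediction), rather than a
# single point-estimate distribution.
candidate_distributions = np.array([
    [0.70, 0.20, 0.10],   # plausible model A
    [0.55, 0.30, 0.15],   # plausible model B
    [0.60, 0.35, 0.05],   # plausible model C
])

# Utility of taking each action when the true class is j
# (rows: actions, columns: true classes); values are hypothetical.
utility = np.eye(3)

# Expected utility of each action under each candidate distribution.
expected_utility = utility @ candidate_distributions.T   # shape: (actions, models)

# Lower (worst-case) expected utility: a guarantee that holds for every
# distribution in the set, not just for one estimated model.
lower_expected_utility = expected_utility.min(axis=1)

# A maximin decision picks the action with the best worst-case guarantee.
best_action = int(np.argmax(lower_expected_utility))
print("lower expected utilities:", lower_expected_utility)
print("maximin action:", best_action)

Under this toy set-up, the reported guarantee is the minimum expected utility over all candidate distributions, which is exactly the kind of statement a single best-fit model cannot provide when the training data under-determine the true process.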