Natalia Díaz-Rodríguez graduated from the University of Granada (Spain) in 2010 and obtained a double PhD from Åbo Akademi University (Finland) and the University of Granada in 2015 on symbolic Artificial Intelligence (hybrid data-driven and knowledge-based human activity modelling and recognition for ambient assisted living). She is Assistant Professor of Artificial Intelligence at the Autonomous Systems and Robotics Lab at ENSTA Paris, Institut Polytechnique de Paris, and at the INRIA Flowers team on developmental robotics. Her background is in knowledge representation, reasoning, and machine learning. Her current interests include deep, reinforcement and unsupervised learning, open-ended learning, continual/lifelong learning, (state) representation learning, neural-symbolic computation, computer vision, autonomous systems, explainable AI, and AI for social good. She has worked on R&D at CERN, Philips Research, the University of California Santa Cruz, and in industry in Silicon Valley at Stitch Fix Inc. (San Francisco, CA). She has participated in a range of international projects (e.g. EU H2020 DREAM, www.robotsthatdream.eu) and was a Management Committee member of EU COST Action AAPELE.EU (Algorithms, Architectures and Platforms for Enhanced Living Environments), a Google Anita Borg Scholar 2014, a Heidelberg Laureate Forum fellow (2014 & 2017), and a Nokia Foundation fellow. She is co-founder and board member of the non-profit continualAI.org organisation on Continual Learning.
In this talk I will discuss different dimensions leading to Never-Ending Learning (NEL). First, I will present the DREAM (Deferred Restructuring of Experience in Autonomous Machines) architecture, the result of a four-year EU project in which we tackled open-ended learning and proposed a developmental approach to open-ended learning in robotics. This is an even more challenging setting that considers more areas than those traditionally covered in Continual Learning (CL). I will then turn to the paradigm of eXplainable AI (XAI) as a dimension leading to NEL and Responsible AI, widely acknowledged as a crucial feature for the practical deployment of AI models. The overviews presented establish a novel definition of explainable Machine Learning that covers prior conceptual propositions, with a major focus on the audience for which explainability is sought. We propose taxonomies aimed at classifying methods for explaining Deep Learning and Reinforcement Learning models. Finally, we show some multi-agent RL examples and an explainability setup in two use cases: learning both symbolic and deep representations with domain expert knowledge graphs, and learning models where the human in the loop can offer help during training.

Reference papers:
https://www.sciencedirect.com/science/article/pii/S0950705120308145
https://www.sciencedirect.com/science/article/pii/S1566253519308103
https://arxiv.org/abs/2104.11914
https://arxiv.org/abs/1910.10045
https://arxiv.org/abs/2006.00882
https://arxiv.org/abs/2008.06693
Dr. Natalia Díaz-Rodríguez
ENSTA Paris, France