Federated Learning from Demonstration for Active Assistance to Smart Wheelchair Users

Learning from Demonstration (LfD) is an appealing approach for endowing robots with autonomy. Given demonstrations provided by a human teacher, the robot can learn a policy to solve a task without explicit programming. A promising use case is to equip smart robotic wheelchairs with active navigation assistance. By using LfD, it is possible to learn to infer short-term destinations anywhere, without the need to build a map of the environment beforehand. Nevertheless, it is difficult to generalize robot behaviors to environments other than those used for training. We believe that one possible solution is learning from crowds, involving a large number of teachers (the end users themselves) who provide demonstrations in diverse, real environments. To this end, in this work we consider Federated Learning from Demonstration (FLfD), a distributed approach based on a Federated Learning architecture. Our proposal allows training a global deep neural network on sensitive local data (images and laser readings) with privacy guarantees. In our experiments we pose a scenario involving different clients working in heterogeneous domains. We show that the federated model is able to generalize and deal with non-Independent and Identically Distributed (non-IID) data.

Keywords: Federated learning, Human-robot interaction, Learning from demonstration, Smart wheelchair
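
The abstract describes training a global network by aggregating updates from clients that keep their demonstrations local. As a rough illustration of that kind of federated aggregation (a FedAvg-style weighted average of client models; the paper's actual network architecture, data, and aggregation details are not specified here), the sketch below uses a toy linear model and synthetic non-IID client data as placeholders for the deep network, images, and laser readings:

```python
# Illustrative FedAvg-style sketch: clients train on private local data,
# the server only sees model parameters, never the raw demonstrations.
# The linear "policy", data, and hyperparameters are assumptions for
# illustration, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, X, y, lr=0.05, epochs=5):
    """One client's round: a few gradient steps on its private dataset."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w


def fedavg(client_weights, client_sizes):
    """Server step: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))


# Three clients with non-IID local data (different input distributions),
# mimicking demonstrations collected in heterogeneous environments.
true_w = np.array([1.5, -2.0])
clients = []
for shift in (-1.0, 0.0, 2.0):
    X = rng.normal(loc=shift, scale=1.0, size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fedavg(updates, [len(y) for _, y in clients])

print("federated estimate:", global_w, "target:", true_w)
```

Despite each client seeing a shifted input distribution, the weighted average converges toward a single shared model, which is the property the abstract highlights for non-IID demonstration data.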