Service robots and other smart devices, such as smartphones, have access to large amounts of data suitable for training learning models, which can greatly improve the customer experience. Federated learning is a popular framework that allows multiple distributed devices to train deep learning models remotely and collaboratively while preserving data privacy. However, little research has addressed the scenario in which the data distribution is non-identical across participants and also changes over time in unforeseen ways, causing what is known as concept drift. This situation is very common in real life and poses new challenges to both federated and continual learning. In this work, we propose an extension of the most widely known federated algorithm, FedAvg, adapting it for continual learning under concept drift. We empirically demonstrate the weaknesses of regular FedAvg and show that our extended method outperforms the original one in this type of scenario.
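For context, the FedAvg aggregation step referenced above combines client models by averaging their parameters, weighted by each client's local sample count. The following is a minimal sketch under simplifying assumptions: each client model is represented as a flat NumPy parameter vector, and the local training loop is omitted; the function name `fedavg_aggregate` and the example values are illustrative, not from the paper.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client parameter vectors, weighted by
    each client's number of local training samples (n_k / n)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape: (num_clients, num_params)
    coeffs = np.array(client_sizes) / total  # mixing coefficients n_k / n
    return coeffs @ stacked                  # sum_k (n_k / n) * w_k

# Example: three clients holding different data volumes.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 30, 60]
global_w = fedavg_aggregate(w, n)  # → array([4., 5.])
```

Because the weighting is fixed by data volume, stale clients whose local distribution has drifted contribute to the global model with undiminished weight, which is one intuition for why plain FedAvg degrades under concept drift.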
Keywords: Federated learning, Continual learning, Nonstationarity, Concept drift