Beyond Catastrophic Forgetting in Continual Learning: An Attempt with SVM
Benavides-Prado, D.
Abstract
A major challenge in continual learning is avoiding catastrophic forgetting of previously learned tasks. The possibility of improving existing knowledge while integrating new information has been explored far less. In this paper we describe a method that aims to improve the performance of previously learned tasks by refining their knowledge as new tasks are observed. Our method demonstrates this ability in the context of Support Vector Machines for binary classification tasks, encouraging retention of existing knowledge while refining it. Experiments with synthetic and real-world datasets show that the performance of these tasks can be continually improved by transferring selected knowledge, leading to an improvement in the performance of the learning system as a whole.
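The idea sketched in the abstract, retaining a compact summary of an earlier task (its support vectors) and refining it with selected examples from a later task, can be illustrated with a minimal, hypothetical sketch using scikit-learn. The synthetic tasks, the agreement-based selection rule, and all names here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_task(shift, n=200):
    """Toy binary task: label depends on a shifted linear boundary (assumption)."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > shift).astype(int)
    return X, y

# Task A: learn it and keep only its support vectors as retained knowledge.
X_a, y_a = make_task(0.0)
svm_a = SVC(kernel="linear").fit(X_a, y_a)
sv_X, sv_y = X_a[svm_a.support_], y_a[svm_a.support_]

# Task B arrives; select its examples that agree with task A's hypothesis
# (a hypothetical stand-in for the paper's "selected knowledge" transfer).
X_b, y_b = make_task(0.2)
agree = svm_a.predict(X_b) == y_b
X_sel, y_sel = X_b[agree], y_b[agree]

# Refine task A's hypothesis: retrain on its retained support vectors plus
# the selected transferable examples from task B.
svm_a_refined = SVC(kernel="linear").fit(
    np.vstack([sv_X, X_sel]), np.concatenate([sv_y, y_sel])
)

# Evaluate both models on fresh task-A data: the refined model should
# retain, and possibly improve, task A's performance.
X_test, y_test = make_task(0.0)
acc_before = svm_a.score(X_test, y_test)
acc_after = svm_a_refined.score(X_test, y_test)
```

Storing only support vectors keeps the memory footprint per past task small, which is why they are a natural unit of retained knowledge for SVM-based continual learning.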