Contrastive Learning for Boosting Knowledge Transfer in Task-Incremental Continual Learning of Aspect Sentiment Classification Tasks
Abstract
Continual learning (CL) aims to learn a sequence of tasks whose datasets arrive incrementally over time, without a predetermined number of tasks. CL models strive to achieve two primary objectives: preventing catastrophic forgetting and facilitating knowledge transfer between tasks. Catastrophic forgetting refers to the sharp decline in a CL model's performance on previously learned tasks as new ones are learned. Knowledge transfer leverages knowledge acquired from previous tasks so that the CL model can tackle new tasks more effectively. However, few CL models proposed so far achieve both objectives simultaneously. In this paper, we present a task-incremental CL model that injects CL-plugins into a pre-trained language model (i.e., BERT) to mitigate catastrophic forgetting. In addition, we propose two contrastive learning-based losses, the contrastive ensemble distillation (CED) loss and the contrastive supervised learning of the current task (CSC) loss, to further improve the model. The CED loss strengthens knowledge transfer across tasks, while the CSC loss improves performance on the current task. Experimental results on benchmark datasets demonstrate that our proposed model outperforms all existing continual learning models in the task-incremental learning setting for continual aspect sentiment classification.
Keywords: Continual Learning, Contrastive Learning, Aspect-Sentiment Classification.
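The abstract mentions injecting CL-plugins into a pre-trained BERT but does not describe their internals. As a rough, hypothetical illustration of the general idea, the sketch below inserts an adapter-style bottleneck module with task-specific parameters into a transformer layer; the names (`CLPlugin`, `bottleneck_dim`, `num_tasks`) and the bottleneck design are assumptions for illustration only, not the paper's actual CL-plugin architecture.

```python
import torch
import torch.nn as nn


class CLPlugin(nn.Module):
    """Hypothetical adapter-style CL-plugin: a small per-task bottleneck
    added inside a (typically frozen) transformer layer. The actual
    CL-plugin design in the paper may differ."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int, num_tasks: int):
        super().__init__()
        # One down/up projection pair per task, so learning a new task
        # does not overwrite parameters trained for earlier tasks.
        self.down = nn.ModuleList(
            [nn.Linear(hidden_dim, bottleneck_dim) for _ in range(num_tasks)])
        self.up = nn.ModuleList(
            [nn.Linear(bottleneck_dim, hidden_dim) for _ in range(num_tasks)])
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor, task_id: int) -> torch.Tensor:
        # Residual update computed with only the current task's parameters.
        delta = self.up[task_id](self.act(self.down[task_id](hidden_states)))
        return hidden_states + delta
```

In task-incremental learning the task identity is available at both training and test time, so the appropriate per-task parameters can be selected by `task_id`.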
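The CED and CSC losses are described only at a high level in the abstract. As an illustrative sketch of the kind of objective involved, the following is a standard supervised contrastive loss over L2-normalized sentence representations (in the spirit of SupCon); the function name, temperature value, and the assumption that features are pooled BERT outputs are placeholders, and the paper's exact CED/CSC formulations may differ.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Illustrative supervised contrastive loss (SupCon-style).

    features: (batch, dim) sentence representations, e.g. pooled BERT outputs.
    labels:   (batch,) sentiment labels; same-label examples act as positives.
    """
    feats = F.normalize(features, dim=1)            # work in cosine-similarity space
    sim = feats @ feats.t() / temperature           # (batch, batch) similarity logits
    batch = features.size(0)
    self_mask = torch.eye(batch, dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # Exclude self-comparisons, then take a log-softmax over the rest of the batch.
    logits = sim.masked_fill(self_mask, -1e9)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average log-probability of positives per anchor (anchors with no positive
    # in the batch are skipped).
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1) / pos_counts
    valid = pos_mask.any(dim=1)
    return -mean_log_prob_pos[valid].mean()
```

In a CSC-style use, `features` and `labels` would come from the current task's training batch; a contrastive term computed between representations of the current model and of previously learned task modules could play the distillation role suggested by CED, though the paper's precise formulation is not given in the abstract.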