Ngo Dinh Luan, Ngo Le Hieu Kien, Dang Van Thin, Duong Ngoc Hao, Nguyen Luu Thuy Ngan


Abstract

In this paper, we describe an empirical study of data augmentation techniques with various pre-trained language models on the bilingual dataset presented at the VLSP 2021 shared task on Vietnamese and English-Vietnamese Textual Entailment. We apply a machine translation tool to generate a new training set from the original training data, and then investigate and compare the effectiveness of monolingual and multilingual models on the new dataset. Our experimental results show that fine-tuning the pre-trained multilingual XLM-R language model on the augmented training set gives the best performance. Our system ranked third in the VLSP 2021 shared task with an F1-score of about 0.88.
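The augmentation idea described above can be sketched as follows: each premise-hypothesis pair is translated into the other language while its entailment label is kept, doubling the training set. This is a minimal illustration, not the authors' implementation; the `translate` function below is a hypothetical stand-in for a real machine translation tool.

```python
def translate(text, src="vi", tgt="en"):
    # Hypothetical placeholder: in practice this would call a real
    # machine translation model or service (e.g. an MT API).
    return f"[{src}->{tgt}] {text}"

def augment_with_translation(examples):
    """Return the original (premise, hypothesis, label) examples plus
    translated copies that reuse the same entailment labels."""
    augmented = list(examples)
    for premise, hypothesis, label in examples:
        augmented.append((translate(premise), translate(hypothesis), label))
    return augmented

# Toy example: one Vietnamese pair becomes two training examples.
train = [("Troi dang mua.", "Duong uot.", "entailment")]
augmented_train = augment_with_translation(train)
```

The augmented set keeps every original example and adds one translated copy per pair, so its size is exactly twice that of the original training data.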