Ta Bao Thang, Dang Dinh Son, Le Dang Linh, Dang Xuan Vuong, Duong Quang Tien


Abstract

End-to-end models have shown significant potential across most languages and have recently proved robust in ASR tasks. Many strong architectures have been proposed, and among them the Recurrent Neural Network Transducer (RNN-T) has achieved remarkable success. However, on spontaneous speech with background noise or reverberation, this architecture generally suffers from high deletion errors. For this reason, we propose a blank label re-weighting technique to improve the state-of-the-art Conformer transducer model. Our system also adopts Stochastic Weight Averaging to stabilize the training process. Our work achieved first rank in Task 2 of the VLSP 2021 Competition with a word error rate of 4.17%.
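
To illustrate the blank label re-weighting idea, the minimal sketch below assumes the common approach of scaling the blank posterior produced by the transducer's joint network before decoding; the constant names (`BLANK_ID`, `BLANK_WEIGHT`) and the placement at decode time are assumptions for illustration, not the paper's exact formulation.

```python
import math
import torch

# Hypothetical constants for illustration; the abstract does not specify them.
BLANK_ID = 0          # index of the blank label in the output vocabulary
BLANK_WEIGHT = 0.8    # < 1.0 penalizes blank emissions, reducing deletions

def reweight_blank(log_probs: torch.Tensor,
                   blank_id: int = BLANK_ID,
                   weight: float = BLANK_WEIGHT) -> torch.Tensor:
    """Down-weight the blank label before the decoder picks a symbol.

    log_probs: (..., vocab_size) joint-network output after log_softmax.
    Adding log(weight) to the blank log-probability is equivalent to
    multiplying its probability by `weight`.
    """
    adjusted = log_probs.clone()
    adjusted[..., blank_id] += math.log(weight)
    return adjusted
```

In such a setup, the re-weighted log-probabilities would simply replace the raw joint-network output inside greedy or beam-search decoding, biasing the search away from emitting blank and thus away from deletion errors.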