Thi Duyen Ngo, Duc Hoang Long Nguyen, Hai Long Luong

Abstract

Sign language is a communication system based on bodily gestures and is used primarily within the deaf community. Because of its limited prevalence, information from books, newspapers, and videos is often not translated into or represented in sign language. This creates challenges for deaf individuals in accessing information, as well as in their learning and in their interactions with hearing people. Historically, conversion between spoken language and sign language relied entirely on interpreters, a limited resource that is not always available. Using technology to convert spoken language into sign language now offers a modern and convenient alternative. This conversion typically involves two steps: first, converting spoken language into text that follows the grammatical structure of sign language; second, representing that text with the corresponding gestures. This paper proposes a method for representing sign language with 3D characters to address the second step. The method constructs a 3D skeleton motion for each word or phrase of input text written in sign language grammar. The per-word motion data are then processed and interconnected to animate a 3D virtual character that represents the complete sentence. We applied the proposed method to represent Vietnamese Sign Language (VSL) with 3D virtual characters. The results were assessed by sign language experts, yielding promising findings that suggest the practical applicability of the proposed methodology.
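As an illustration of the second step, the sketch below shows one simple way per-word skeleton clips could be chained into a single sentence motion by cross-fading adjacent clips. This is a minimal sketch with hypothetical array shapes, joint counts, and a linear blend; it is not the paper's actual motion-processing pipeline.

```python
import numpy as np

def blend_clips(clip_a, clip_b, blend_frames=10):
    """Cross-fade the last frames of clip_a into the first frames of clip_b.

    Each clip is a (num_frames, num_joints, 3) array of per-frame joint data.
    The blend length and linear interpolation are illustrative choices only.
    """
    blend_frames = min(blend_frames, len(clip_a), len(clip_b))
    weights = np.linspace(0.0, 1.0, blend_frames)[:, None, None]
    # Blend the overlapping region frame by frame.
    transition = (1.0 - weights) * clip_a[-blend_frames:] + weights * clip_b[:blend_frames]
    return np.concatenate([clip_a[:-blend_frames], transition, clip_b[blend_frames:]])

def build_sentence_motion(word_clips, blend_frames=10):
    """Chain per-word skeleton motions into one continuous sentence motion."""
    sentence = word_clips[0]
    for clip in word_clips[1:]:
        sentence = blend_clips(sentence, clip, blend_frames)
    return sentence

# Example: three hypothetical word motions of 60, 45, and 80 frames, 25 joints each.
clips = [np.random.rand(n, 25, 3) for n in (60, 45, 80)]
motion = build_sentence_motion(clips)
print(motion.shape)  # (165, 25, 3): two 10-frame junctions are overlapped and blended
```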