Volume 22, Issue 1 (March 2026)                   IJEEE 2026, 22(1): 3717-3717





Ha M, Nguyen D, Dinh T, Tien-Tam T, Thanh D T, Tzyh-Chiang Chen O. Deep Learning Based Graph Convolutional Network Using Hand Skeletal Points For Vietnamese Sign Language Classification. IJEEE 2026; 22 (1) :3717-3717
URL: http://ijeee.iust.ac.ir/article-1-3717-en.html
Abstract:

This paper develops a robust and efficient method for classifying Vietnamese Sign Language gestures. The study leverages deep learning techniques, specifically a Graph Convolutional Network (GCN), to analyze hand skeletal points for gesture recognition. The work builds a custom Vietnamese Sign Language dataset (ViSL) of 33 characters and numbers, conducts experiments to validate the model's performance, and compares it with existing architectures. The proposed approach integrates multiple streams of GCN, based on the lightweight MobileNet architecture. The custom dataset is preprocessed with Mediapipe to extract key skeletal points, which form the input to the multi-stream GCN. Experiments were conducted to evaluate the proposed model's accuracy, comparing its performance with traditional architectures such as VGG and ViT. The experimental results highlight the proposed model's superior performance: it achieves a test accuracy of 99.94% on the custom ViSL dataset, and accuracies of 99.3% and 99.4% on the American Sign Language (ASL) and ASL MNIST datasets, respectively. The multi-stream GCN approach significantly outperformed traditional architectures in both accuracy and computational efficiency. This study demonstrates the effectiveness of multi-stream GCNs based on MobileNet for ViSL recognition, showcasing their potential for real-world applications.
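The abstract describes a pipeline in which Mediapipe hand landmarks feed a graph convolutional network. The paper's code is not included here; a minimal sketch of a single graph-convolution step over Mediapipe's 21 hand landmarks might look like the following, assuming the standard normalized-adjacency GCN formulation (the edge list follows Mediapipe's published hand-skeleton topology; the weights and input are random placeholders, not the paper's trained model):

```python
import numpy as np

# Mediapipe Hands yields 21 landmarks per hand; these edges mirror its
# hand-skeleton topology (wrist, four joints per finger, palm links).
EDGES = [
    (0, 1), (1, 2), (2, 3), (3, 4),         # thumb
    (0, 5), (5, 6), (6, 7), (7, 8),         # index finger
    (0, 9), (9, 10), (10, 11), (11, 12),    # middle finger
    (0, 13), (13, 14), (14, 15), (15, 16),  # ring finger
    (0, 17), (17, 18), (18, 19), (19, 20),  # pinky
    (5, 9), (9, 13), (13, 17),              # palm
]

def normalized_adjacency(n, edges):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix."""
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(X, A_hat, W):
    """One graph-convolution step: ReLU(A_hat @ X @ W)."""
    return np.maximum(A_hat @ X @ W, 0.0)

# 21 landmarks with (x, y, z) coordinates mapped to 16 hidden features.
rng = np.random.default_rng(0)
X = rng.standard_normal((21, 3))        # landmark coordinates (placeholder)
W = rng.standard_normal((3, 16)) * 0.1  # layer weights (placeholder)
A_hat = normalized_adjacency(21, EDGES)
H = gcn_layer(X, A_hat, W)              # node features, shape (21, 16)
```

In a multi-stream variant as described in the abstract, several such branches (e.g. over raw coordinates and over frame-to-frame motion) would be run in parallel and their outputs fused before classification.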

Full-Text [PDF 2146 kb]
Type of Study: Research Paper | Subject: Deep Learning
Received: 2025/01/28 | Revised: 2025/12/10 | Accepted: 2025/09/15

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2022 by the authors. Licensee IUST, Tehran, Iran. This is an open access journal distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.