Author
Kasian Myagila
Institution
The University of Dodoma
Abstract
Sign language is used by speech-impaired people for communication. Although it is an effective form of communication for them, people who do not know sign language, particularly those without such an impairment, find it difficult to communicate with speech-impaired people. Since sign language is a visual language, several machine learning techniques have been applied to sign language translation to improve performance. However, sign languages differ from one another, and no study has been found on Tanzania Sign Language, the language used by speech-impaired people in Tanzania. Moreover, no study has established whether there is a significant difference in performance between the Support Vector Machine and the Convolutional Neural Network, even though the literature shows that both perform well on different sign languages. This study aimed to compare the performance of the Support Vector Machine and the Convolutional Neural Network in translating Tanzania Sign Language through image recognition. The study used Tanzania Sign Language images as the dataset, with 30 words chosen from the context of education; the dataset consisted of 3000 images captured with a camera. To reduce the dimensionality of the dataset, the study adopted Principal Component Analysis for feature extraction. Furthermore, the study employed the combined 5x2cv F test to determine whether the difference in performance between the two algorithms is significant. The findings revealed that both techniques achieved significant rates of accuracy, precision …
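The sketch below is a minimal illustration, not the study's actual pipeline, of how a combined 5x2cv F test (Alpaydin, 1999) can compare two classifiers after PCA feature extraction. The synthetic data, image size, PCA dimension, and the MLP used as a lightweight stand-in for the CNN are all assumptions made for this example.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data: 3000 flattened "sign" images and 30 word classes,
# mirroring the study's dataset sizes; the pixel values here are random.
X = rng.normal(size=(3000, 64 * 64))
y = rng.integers(0, 30, size=3000)

def make_models():
    # PCA reduces the flattened images before each classifier, as in the study.
    svm = make_pipeline(StandardScaler(), PCA(n_components=50),
                        SVC(kernel="rbf"))
    # MLP is only a stand-in for the CNN so the sketch stays in scikit-learn.
    mlp = make_pipeline(StandardScaler(), PCA(n_components=50),
                        MLPClassifier(hidden_layer_sizes=(128,), max_iter=300))
    return svm, mlp

diffs = np.zeros((5, 2))               # error-rate differences p_i^(j)
for i in range(5):                     # five repetitions of 2-fold CV
    cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=i)
    for j, (train, test) in enumerate(cv.split(X, y)):
        m1, m2 = make_models()
        err1 = 1 - m1.fit(X[train], y[train]).score(X[test], y[test])
        err2 = 1 - m2.fit(X[train], y[train]).score(X[test], y[test])
        diffs[i, j] = err1 - err2

p_bar = diffs.mean(axis=1, keepdims=True)
s2 = ((diffs - p_bar) ** 2).sum(axis=1)        # per-repetition variance
f_stat = (diffs ** 2).sum() / (2 * s2.sum())   # F ~ F(10, 5) under H0
p_value = stats.f.sf(f_stat, 10, 5)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # p < 0.05 => significant difference
```

Under the null hypothesis that the two classifiers perform equally well, the statistic follows an F distribution with 10 and 5 degrees of freedom, so a small p-value indicates a significant performance difference between the compared models.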