A Two-Stage Image Frame Extraction Model -ISLKE for Live Gesture Analysis on Indian Sign Language

Hyma, J; Rajamani, P

Abstract

The new industrial revolution focuses on smart, interconnected technologies, combining robotics, artificial intelligence, machine learning, and data analytics on real-time data to produce value-added products. The ways goods are produced are increasingly aligned with people's lifestyles, as witnessed in wearable smart devices, digital assistants, self-driving cars, and similar products. Over the last few years, the true potential of Industry 4.0 has also become evident in the health-service domain. In this context, Sign Language Recognition, a breakthrough in the live video processing domain that serves the deaf and mute communities, has drawn the attention of many researchers. Research insights make it clear that precise extraction and interpretation of gesture data, together with addressing the prevailing limitations, is a crucial task. This has driven the present work to develop a unique keyframe extraction model focused on precise interpretation. The proposed model, ISLKE, performs a clustering-based two-stage keyframe extraction process. It was experimented on a daily-usage vocabulary of Indian Sign Language (ISL) and attained an average accuracy of 96% against ground-truth facts. It was also observed that, with the two-stage approach, filtering out uninformative frames reduced complexity and computational effort. These findings support the further development of commercial communication applications that reach communities with speech and hearing disorders.
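The abstract does not detail the two stages of ISLKE. Purely as an illustration of what a clustering-based two-stage keyframe extraction pipeline can look like, the sketch below clusters frames by coarse grayscale histograms (an assumed feature choice, not from the paper), keeps the frame nearest each cluster centre as a candidate keyframe, and then filters out uninformative candidates by a pixel-variance threshold (also an assumption). It is a minimal, self-contained sketch, not the authors' method.

```python
import numpy as np

def frame_features(frames, bins=16):
    """Coarse grayscale histograms as per-frame feature vectors (illustrative choice)."""
    return np.array([
        np.histogram(f, bins=bins, range=(0, 256), density=True)[0]
        for f in frames
    ])

def kmeans(X, k, iters=50, seed=0):
    """Plain NumPy k-means; returns cluster labels and centres."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each feature vector to its nearest centre.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def extract_keyframes(frames, k=3, var_threshold=10.0):
    # Stage 1: cluster frames, keep the frame closest to each cluster centre.
    X = frame_features(frames)
    labels, centers = kmeans(X, k)
    reps = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue
        dists = ((X[idx] - centers[j]) ** 2).sum(-1)
        reps.append(int(idx[np.argmin(dists)]))
    # Stage 2: drop uninformative representatives (e.g. blank,
    # near-constant frames with very low pixel variance).
    return [i for i in reps if frames[i].var() > var_threshold]
```

Under this sketch, a constant (blank) frame forms its own cluster in stage 1 but is discarded in stage 2, which mirrors the abstract's point that filtering uninformative frames reduces downstream computational effort.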


Keyword(s)

Classification, Clustering, Feature learning, Image processing, Region of interest, Video summarization

