
Sign language recognition using combinational features

Introduction

The idea is to make computers understand human languages and to develop user-friendly human-computer interfaces (HCI). Making a computer recognize speech, facial expressions and human gestures are steps towards this goal. Human-Computer Interaction (HCI) is the study of the interface between users and computers. A gesture is a movement of the body that conveys an idea or attitude non-verbally, and a person can perform innumerable gestures. Hand gestures are a collection of movements of the hand and arm that range from static poses, such as pointing at something, to dynamic motions. Hand gesture recognition has recently become an interesting topic in several areas, such as computer science and electronics, and hand gesture recognition systems have been applied in different domains including sign language, virtual environments, smart surveillance, robot control and medical systems.

In sign language, each gesture has an assigned meaning. This work proposes a special device to enable effective communication between a speech-impaired person and a visually impaired person. The visually impaired person cannot see things but can communicate by voice, whereas the speech-impaired person can see but cannot speak; when the two live in the same room, there is no direct channel between them and communication lags, which is critical in an emergency when something is urgently needed. In the proposed system, each gesture used by the speech-impaired person has a separate meaning, and a message template for every gesture is stored in a database. A gesture is captured as a snapshot, compared with the gestures stored in the database, and, for a matched gesture, the corresponding voice output is generated.
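To make the pipeline concrete, here is a minimal Python sketch of the snapshot-match-speak loop described above. The template file names, the matching threshold, and the use of OpenCV template matching with the pyttsx3 speech engine are illustrative assumptions, not details of the original system.

    import cv2
    import pyttsx3

    # Hypothetical gesture database: label -> grayscale template image.
    GESTURE_DB = {
        "help": cv2.imread("templates/help.png", cv2.IMREAD_GRAYSCALE),
        "water": cv2.imread("templates/water.png", cv2.IMREAD_GRAYSCALE),
    }

    def match_gesture(snapshot_gray, threshold=0.8):
        """Return the best-matching gesture label, or None if below threshold."""
        best_label, best_score = None, threshold
        for label, template in GESTURE_DB.items():
            result = cv2.matchTemplate(snapshot_gray, template, cv2.TM_CCOEFF_NORMED)
            score = float(result.max())
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    def speak(text):
        """Generate the voice output for a matched gesture."""
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

    snapshot = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)
    label = match_gesture(snapshot)
    if label is not None:
        speak(label)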

Current Scenario in Sign Language Recognition

Jebali, M., Dalle, P., & Jemni, M. [1] proposed modeling a sign language recognition system based on prediction in the context of a dialogue between the system (an avatar) and the interlocutor, with the aim of building a ludic application. The main recognition method includes an empirical tracking method that changes dynamically according to each stage of the dialogue. To succeed in tracking the hands and head, video processing and linguistic models are developed. A particle filter algorithm is used to cope with the non-linear problems of noise, occlusions, background complexity and fast dynamic changes. However, occlusions between similarly coloured targets cannot be handled accurately, because the method disregards spatial information.

Kumar, S., & Kaurav, A. [2] note that stationary hand gesture recognition can be applied in a variety of domains such as human-computer interaction (HCI), remote control and virtual reality. Their work focuses on three main issues in developing a gesture recognition system: (i) thresholding, (ii) illumination normalization, and (iii) user-independent gesture recognition based on a fusion of moments. A semi-supervised learning algorithm based on modified K-means clustering and the Mahalanobis distance is employed to extract the skin-colour region from the captured hand gesture images; all operations are performed on RGB colour images. Two hand gesture databases were built: in the first, a uniform black background is used behind the user to avoid background noise, while the second is captured against a complex background.
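As a rough illustration of the segmentation idea in [2], the sketch below clusters RGB pixels with K-means and refines the skin region with the Mahalanobis distance. The nominal skin-tone reference and the distance threshold are assumptions made for the example; the authors' modified K-means is not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_skin(image_rgb, n_clusters=3, maha_threshold=3.0):
        pixels = image_rgb.reshape(-1, 3).astype(np.float64)
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)

        # Pick the cluster whose centre is closest to a nominal skin tone (assumed value).
        skin_ref = np.array([180.0, 120.0, 90.0])
        skin_cluster = np.argmin(np.linalg.norm(km.cluster_centers_ - skin_ref, axis=1))

        # Mahalanobis distance of every pixel to the skin-cluster distribution.
        skin_pixels = pixels[km.labels_ == skin_cluster]
        mean = skin_pixels.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(skin_pixels, rowvar=False))
        diff = pixels - mean
        maha = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

        # Boolean mask of skin pixels, reshaped to the image grid.
        return (maha < maha_threshold).reshape(image_rgb.shape[:2])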

Shen, C., Chen, Y., Yang, G., & Guan, X. [3] investigate the feasibility and applicability of using wristband-interaction behaviour to recognize hand-dominated activities, with the advantages of high compliance and long wearing time. For each action, sensor data from the wristband are analyzed to obtain kinematic sequences. The sequences are then described by statistics-, frequency- and wavelet-domain features to provide an accurate and fine-grained characterization of hand-dominated actions, and the correlation between the wristband-sensor features and the actions is analyzed. Classification techniques (naive Bayes, nearest neighbour, neural network, support vector machine and random forest) are applied to the feature space to perform hand-dominated activity recognition.
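A hedged sketch of the kind of windowed feature extraction described in [3] follows: statistics-, frequency- and wavelet-domain descriptors of one sensor axis, fed to a random forest. The window length, wavelet choice and exact feature set are assumptions, not the authors' configuration.

    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier

    def window_features(signal):
        """signal: 1-D array of one wristband sensor axis over a time window."""
        feats = [signal.mean(), signal.std(), signal.min(), signal.max()]  # statistics domain
        spectrum = np.abs(np.fft.rfft(signal))
        feats += [float(spectrum.argmax()), float(spectrum.sum())]         # frequency domain
        coeffs = pywt.wavedec(signal, "db4", level=3)                      # wavelet domain
        feats += [float(np.sum(c ** 2)) for c in coeffs]                   # sub-band energies
        return np.array(feats)

    def train_activity_model(windows, labels):
        """windows: list of 1-D arrays; labels: one activity label per window."""
        X = np.vstack([window_features(w) for w in windows])
        return RandomForestClassifier(n_estimators=100).fit(X, labels)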

Normani, N., Urru, A., Abraham, L., Walsh, M., & Tedesco, S. [4] describe hardware that relies on the stereoscopic vision of two novel Lensless Smart Sensors (LSS) combined with IR filters and five hand-mounted LEDs for tracking. Tracking common gestures generates a six-gesture dataset, which is then used to train three machine learning models: k-nearest neighbours, support vector machine and random forest. An offline analysis highlights how different LED positions on the hand affect the classification accuracy. The multi-point tracking algorithm is structured into two main phases: calibration and tracking. In the calibration phase, the hand is assumed to be kept still in front of the sensors with the palm and fingers open and the middle finger pointing upwards; calibration is performed once at system start to capture the LEDs' relative distances from each other and use them as a reference in the tracking phase. In the tracking phase, the algorithm is iterated continuously every time a new frame is captured.
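The calibration step lends itself to a short sketch: with the hand held still, record the pairwise distances between the five tracked LEDs and keep them as the reference used to re-identify LEDs during tracking. The assumption that triangulated 3-D LED positions are already available is mine, for illustration.

    import numpy as np

    def calibrate(led_points):
        """led_points: (5, 3) array of LED positions from the stereo sensor pair."""
        n = led_points.shape[0]
        reference = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                # Pairwise Euclidean distances between LEDs on the still hand.
                reference[i, j] = np.linalg.norm(led_points[i] - led_points[j])
        return reference  # consulted every frame during the tracking phase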

Kumar, P., Rautaray, S. S., & Agrawal, A. [5] propose a real-time human-computer interaction system based on a hand data glove and a k-NN classifier for gesture recognition. HCI is becoming more natural and intuitive to use, and since the hand is the part of the body most frequently used for interaction in digital environments, the complexity and flexibility of hand motion is an active research topic. A data glove is used to recognize hand gestures accurately and reliably: the glove captures the current position and angles of the hand and fingers, which are then classified using the k-NN classifier. The gestures classified are clicking, dragging, rotating, pointing and the idle position.
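A minimal sketch of the glove-plus-k-NN idea in [5]: each glove reading (finger bend angles and hand orientation) forms a feature vector that k-NN assigns to one of the five gestures. The feature layout and the value of k are assumptions.

    from sklearn.neighbors import KNeighborsClassifier

    GESTURES = ["clicking", "dragging", "rotating", "pointing", "idle"]

    def train_knn(X_train, y_train, k=5):
        """X_train: (n_samples, n_features) glove vectors; y_train: gesture labels."""
        return KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)

    def recognize(model, glove_reading):
        """Classify a single glove reading into one of the GESTURES."""
        return model.predict([glove_reading])[0]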

Starner, T., Weaver, J., & Pentland, A. [6] present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk-mounted camera, a second-person view; the second system mounts the camera in a cap worn by the user and demonstrates how a wearable computer might use this method as part of an ASL-to-English translator. Both experiments use a 40-word lexicon.

Mankar, S. M., & Chhabria, S. A. [7] investigated how to track the movement of the hand and how to recognize a click gesture in order to implement a new type of user interface. They developed a wristwatch-type human-computer interface (HCI) device that can estimate and express the user's intuitive hand movements based on a 9-axis inertial measurement unit (IMU) comprising an accelerometer, a magnetometer and a gyroscope. The work defines an Euler angular projection function to map the hand angle intuitively onto the screen and to represent its motion reliably. It also proposes a machine-learning-based gesture-recognition algorithm, extracting a window size optimized for the click gesture and collecting accurate ground truth in a real, noisy computing environment. Finally, a natural user interface that is robust to the actual environment is designed by integrating hand motion tracking with click gesture recognition, and the reliability of the motion tracking is demonstrated by comparison with the conventional method.

Mesbahi, S. C., Mahraz, M. A., Riffi, J., & Tairi, H. [8] present a method for hand gesture recognition using convexity defects and background subtraction. First, background subtraction is used to eliminate useless information; image processing techniques then find the contour of the segmented hand image, after which the convex hull and convexity defects of this contour are computed. Feature extraction aims to detect and extract features that determine the significance of a given hand gesture; the features must characterize the gesture alone and be invariant under translation and rotation of the hand to ensure reliable recognition. The authors extract a series of features based on convexity defect detection, exploiting the close relationship between convexity defects and fingertips. The method is simple, efficient and independent of gesture direction and position, and was tested on five hand gesture classes showing one, two, three, four and five fingers.
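The convexity-defect pipeline of [8] maps naturally onto OpenCV, as in the hedged sketch below: background subtraction, largest contour, convex hull, then convexity defects, whose deep valleys approximate the gaps between extended fingers. The defect-depth threshold is an assumed value, not taken from the paper.

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2()

    def count_fingers(frame, depth_threshold=10000):
        """Estimate the number of extended fingers in a video frame."""
        mask = subtractor.apply(frame)  # background subtraction
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0
        hand = max(contours, key=cv2.contourArea)          # largest contour = hand
        hull = cv2.convexHull(hand, returnPoints=False)    # hull as point indices
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return 0
        # Each deep valley between two fingertips yields one convexity defect;
        # OpenCV stores the depth in fixed-point format (value = pixels * 256).
        deep = sum(1 for d in defects[:, 0] if d[3] > depth_threshold)
        return deep + 1 if deep > 0 else 0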

Sang, Y., Shi, L., & Liu, Y. [9] proposed a micro hand gesture recognition system and methods using ultrasonic active sensing. The system uses micro dynamic hand gestures for recognition to achieve human-computer interaction (HCI). The implemented system, called hand-ultrasonic gesture (HUG), consists of ultrasonic active sensing, pulsed radar signal processing, and time-sequence pattern recognition by machine learning. Lower-frequency (300 kHz) ultrasonic active sensing is adopted to obtain high-resolution range-Doppler image features, and, using these high-quality sequential features, a state-transition-based hidden Markov model is proposed for gesture classification.
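As a hedged sketch of state-transition-based classification in the spirit of [9], one Gaussian HMM can be trained per gesture class on the feature sequences, with a new sequence labelled by maximum likelihood. The hmmlearn library stands in here for the authors' own HMM formulation, and the state count is an assumption.

    import numpy as np
    from hmmlearn import hmm

    def train_models(sequences_by_class, n_states=4):
        """sequences_by_class: {label: list of (T_i, n_features) arrays}."""
        models = {}
        for label, seqs in sequences_by_class.items():
            X = np.vstack(seqs)                # concatenated observations
            lengths = [len(s) for s in seqs]   # per-sequence lengths
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            models[label] = m.fit(X, lengths)
        return models

    def classify(models, sequence):
        """Label an unseen (T, n_features) sequence by maximum log-likelihood."""
        return max(models, key=lambda lbl: models[lbl].score(sequence))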

Ren, Z., Yuan, J., Meng, J., & Zhang, Z. [10] observe that recently developed depth sensors, e.g. the Kinect sensor, have provided new opportunities for human-computer interaction (HCI). Although great progress has been made by leveraging the Kinect sensor, e.g. in human body tracking, face recognition and human action recognition, robust hand gesture recognition remains an open problem. Compared to the entire human body, the hand is a smaller object with more complex articulations that is more easily affected by segmentation errors, which makes recognizing hand gestures very challenging. This work focuses on building a robust part-based hand gesture recognition system using the Kinect sensor. To handle the noisy hand shapes obtained from the sensor, the authors propose a novel distance metric, the Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. Because it matches only the finger parts rather than the whole hand, it can better distinguish hand gestures with slight differences. Extensive experiments demonstrate that the system is accurate.
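FEMD itself aligns finger parts of the hand contour; as a deliberately simplified stand-in, the sketch below compares 1-D "finger signature" arrays of two hand shapes with the classic earth mover's (Wasserstein) distance and classifies by the nearest template. This is not the authors' FEMD, only an illustration of dissimilarity-based template matching.

    from scipy.stats import wasserstein_distance

    def dissimilarity(sig_a, sig_b):
        """sig_a, sig_b: 1-D finger-signature arrays of a hand shape (assumed given)."""
        return wasserstein_distance(sig_a, sig_b)

    def classify_shape(signature, templates):
        """templates: {gesture label: signature array}; the nearest template wins."""
        return min(templates, key=lambda lbl: dissimilarity(signature, templates[lbl]))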

Research Direction

The various studies on hand gestures contribute useful techniques and point to future directions, although these systems still produce some inaccurate results. The following summary lists each approach with its reported accuracy.

Author, method and accuracy:

[1] Jebali, M., Dalle, P., & Jemni, M. (accuracy: 70%). The objective is to bring HCI performance close to human-human interaction by modeling a sign language recognition system based on prediction in the context of a dialogue between the system (avatar) and the interlocutor, to make a ludic application. The main recognition method includes an empirical tracking method that changes dynamically with each stage of the dialogue.

[2] Kumar, S., & Kaurav, A. (accuracy: 80.6%). Pixel-based segmentation of hand regions from static hand gesture colour images is very demanding due to unreliable lighting conditions and complex backgrounds. Since skin pixels vary with illumination, finding the range of skin pixels becomes a hard task in colour-space-based skin segmentation; a learning algorithm based on clustering is proposed.

[3] Shen, C., Chen, Y., Yang, G., & Guan, X. (accuracy: 90%). For each action, sensor data from the wristband are analyzed to obtain kinematic sequences, which are then described by statistics-, frequency- and wavelet-domain features for an accurate and fine-grained characterization of hand-dominated actions; the correlation between the wristband-sensor features and the actions is analyzed, and classification techniques are applied to the feature space.

[4] Normani, N., Urru, A., Abraham, L., Walsh, M., & Tedesco, S. (accuracy: 86%). Hand motion tracking traditionally requires systems that are highly complex and expensive in terms of energy and computation; a low-power, low-cost system could lead to a revolution in this field. The hardware relies on the stereoscopic vision of two novel Lensless Smart Sensors (LSS) combined with IR filters and five hand-mounted LEDs.

[5] Kumar, P., Rautaray, S. S., & Agrawal, A. (accuracy: 75-80%). A real-time human-computer interaction system based on a hand data glove and a k-NN classifier: the glove captures the current position and angles of the hand and fingers, which are then classified with k-NN.

[6] Starner, T., Weaver, J., & Pentland, A. (accuracy: 80-90%). American Sign Language (ASL) recognition using a single camera to track the user's unadorned hands: the first system observes the user from a desk-mounted camera and achieves 80 percent word accuracy, while the second mounts the camera in a cap worn by the user and achieves 90 percent.

[7] Mankar, S. M., & Chhabria, S. A. (accuracy: 90.4%). A wristwatch-type human-computer interface (HCI) device that estimates and expresses the user's intuitive hand movements based on a 9-axis inertial measurement unit (IMU) comprising an accelerometer, a magnetometer and a gyroscope.

[8] Mesbahi, S. C., Mahraz, M. A., Riffi, J., & Tairi, H. (accuracy: 80%). Hand gesture recognition using convexity defects and background subtraction, with background subtraction used first to eliminate useless information.

[9] Sang, Y., Shi, L., & Liu, Y. (accuracy: 75%, limited by the ultrasonic sensing). A micro hand gesture recognition system using ultrasonic active sensing for HCI; the implemented hand-ultrasonic gesture (HUG) system consists of ultrasonic active sensing, pulsed radar signal processing, and time-sequence pattern recognition by machine learning.

[10] Ren, Z., Yuan, J., Meng, J., & Zhang, Z. (accuracy: 85%). The Kinect sensor provides new opportunities for HCI; because FEMD matches only the finger parts rather than the whole hand, it can better distinguish hand gestures with slight differences.

Conclusion

Human hand gestures provide the most important means of non-verbal interaction among people. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, gesture recognition, prediction, system identification and control. ANNs provide a good and powerful solution for gesture recognition in MATLAB, and their ability to generalize makes them a natural fit for the task. Gesture recognition is a challenging and interesting problem in computer vision in terms of accuracy and usefulness; rotation, illumination changes, background variations and pose variations of the hand make it harder still. The most important advantage is that physically challenged persons can interact efficiently without any physical restriction. The implementation of the proposed system aims to translate gestures into speech (voice).
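For completeness, a minimal sketch of the ANN classifier the conclusion points to, using scikit-learn's multilayer perceptron in place of the MATLAB toolbox mentioned above; the layer sizes and iteration budget are assumptions.

    from sklearn.neural_network import MLPClassifier

    def train_ann(X, y):
        """X: (n_samples, n_features) gesture feature vectors; y: gesture labels."""
        net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
        return net.fit(X, y)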

References

  1. Jebali, M., Dalle, P., & Jemni, M. (2014, June). Sign language recognition system based on prediction in human-computer interaction. In International Conference on Human-Computer Interaction (pp. 565-570). Springer, Cham.
  2. Kumar, S., & Kaurav, A. (2018, January). Hand gesture through geometric moments (HCI based). In 2018 2nd International Conference on Inventive Systems and Control (ICISC) (pp. 561-565). IEEE.
  3. Shen, C., Chen, Y., Yang, G., & Guan, X. (2018). Toward hand-dominated activity recognition systems with wristband-interaction behavior analysis. IEEE Transactions on Systems, Man, and Cybernetics: Systems.
  4. Normani, N., Urru, A., Abraham, L., Walsh, M., Tedesco, S., Cenedese, A., … & O'Flynn, B. (2018). A machine learning approach for gesture recognition with a lensless smart sensor system.
  5. Kumar, P., Rautaray, S. S., & Agrawal, A. (2012, March). Hand data glove: A new generation real-time mouse for human-computer interaction. In Recent Advances in Information Technology (RAIT), 2012 1st International Conference on (pp. 750-755). IEEE.
  6. Starner, T., Weaver, J., & Pentland, A. (1998). Real-time American Sign Language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12), 1371-1375.
  7. Mankar, S. M., & Chhabria, S. A. (2015, January). Review on hand gesture based mobile control application. In Pervasive Computing (ICPC), 2015 International Conference on (pp. 1-2). IEEE.
  8. Mesbahi, S. C., Mahraz, M. A., Riffi, J., & Tairi, H. Hand gesture recognition based on convexity approach and background subtraction.
  9. Sang, Y., Shi, L., & Liu, Y. (2018). Micro hand gesture recognition system using ultrasonic active sensing. IEEE Access, 6, 49339-49347.
  10. Ren, Z., Yuan, J., Meng, J., & Zhang, Z. (2013). Robust part-based hand gesture recognition using Kinect sensor. IEEE Transactions on Multimedia, 15(5), 1110-1120.