US 9,811,722 B2
Multi-camera spatial sign language recognition method and system
Mohamed Mohandes, Dhahran (SA); Mohamed Abdelouaheb Deriche, Dhahran (SA); and Salihu Oladimeji Aliyu, Dhahran (SA)
Assigned to KING FAHD UNIVERSITY OF PETROLEUM AND MINERALS, Dhahran (SA)
Filed by King Fahd University of Petroleum and Minerals, Dhahran (SA)
Filed on Apr. 14, 2017, as Appl. No. 15/488,225.
Application 15/488,225 is a continuation of application No. 14/862,305, filed on Sep. 23, 2015, granted, now 9,672,418.
Claims priority of provisional application 62/113,276, filed on Feb. 6, 2015.
Prior Publication US 2017/0220856 A1, Aug. 3, 2017
This patent is subject to a terminal disclaimer.
Int. Cl. G06K 9/00 (2006.01); G06F 3/01 (2006.01); H04N 5/33 (2006.01); G06K 9/46 (2006.01); G06K 9/62 (2006.01); G06T 19/20 (2011.01); G09B 21/00 (2006.01)
CPC G06K 9/00355 (2013.01) [G06F 3/011 (2013.01); G06K 9/4604 (2013.01); G06K 9/6267 (2013.01); G06T 19/20 (2013.01); G09B 21/009 (2013.01); H04N 5/33 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A method for sign language recognition comprising:
detecting and tracking at least one hand and at least one finger of the at least one hand from at least two different locations in a room by at least two different sensor circuitries;
generating a 3-dimensional (3D) interaction space based on at least two different Leap Motion Controllers (LMCs), each comprising a plurality of infrared (IR) cameras and a plurality of IR light-emitting diodes (LEDs);
acquiring 3D data related to the at least one detected and tracked hand and the at least one detected and tracked finger;
extracting 3D features associated with the at least one detected and tracked hand and the at least one detected and tracked finger;
analyzing, by processing circuitry, a relevance metric related to the extracted 3D features;
classifying, by classifier circuitry, at least one pattern from each of the at least two different locations based on a fusion of data outputs by the classifier circuitry;
generating a recognized Arabic alphabet sign language letter based on the fusion of the data outputs;
generating a matrix of recognized sign language letters; and
outputting at least one word based on the generated matrix of recognized sign language letters.
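The claim describes a recognition pipeline: per-sensor 3D feature extraction, per-sensor pattern classification, decision-level fusion of the two sensors' outputs, and accumulation of recognized letters into a word. A minimal sketch of that flow is below. It is illustrative only, not the patented implementation: the nearest-centroid classifier, the inverse-distance scoring, the score-averaging fusion rule, the toy 3D feature vectors, and the letter names are all assumptions made for the example.

```python
# Illustrative sketch of two-sensor classification with decision-level fusion.
# Classifier type, scoring, and fusion rule are assumptions, not the patent's method.
import math

def classify_scores(features, centroids):
    """Score one sensor's 3D feature vector against per-letter centroids.

    Returns a dict mapping each letter to an inverse-distance similarity,
    so the nearest centroid receives the highest score.
    """
    scores = {}
    for letter, centroid in centroids.items():
        d = math.dist(features, centroid)
        scores[letter] = 1.0 / (1.0 + d)
    return scores

def fuse_and_recognize(scores_a, scores_b):
    """Decision-level fusion: average the two sensors' scores, pick the max."""
    fused = {letter: 0.5 * (scores_a[letter] + scores_b[letter])
             for letter in scores_a}
    return max(fused, key=fused.get)

# Toy per-letter centroids for each sensor (hypothetical 3D feature vectors).
centroids_a = {"alif": (0.0, 0.0, 0.0), "ba": (1.0, 1.0, 1.0), "ta": (2.0, 2.0, 2.0)}
centroids_b = {"alif": (0.0, 0.0, 0.0), "ba": (1.0, 1.0, 1.0), "ta": (2.0, 2.0, 2.0)}

# One observed frame from each sensor; both lie nearest the "alif" centroid.
scores_a = classify_scores((0.1, 0.0, 0.1), centroids_a)
scores_b = classify_scores((0.2, 0.1, 0.0), centroids_b)

# Recognized letters accumulate into a list (the claim's "matrix of
# recognized sign language letters"), from which a word is output.
recognized = []
recognized.append(fuse_and_recognize(scores_a, scores_b))
word = "-".join(recognized)
```

In this sketch each sensor votes with a full score vector rather than a single label, so the fusion step can break ties between disagreeing sensors; the claim itself specifies only that the outputs of the classifications from the two locations are fused.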