Research aimed at detecting sign language content wins best paper award at prestigious conference

May 16, 2018

The paper "Comparing Visual, Textual and Multimodal Features for Detecting Sign Language in Video Sharing Sites" by Caio Monteiro, Dr. Frank Shipman and Dr. Ricardo Gutierrez-Osuna from the Department of Computer Science and Engineering at Texas A&M University won the Best Paper Award at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Multimedia Information Processing and Retrieval.

Locating videos containing sign language is a challenge for members of the hearing-impaired community. To address this problem, the authors developed a method to detect sign-language content in videos using machine-learning techniques. The approach extracts features both from the video itself (e.g., hand movements) and from the metadata (e.g., title, description) attached to videos on YouTube and other video sharing sites.
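As a purely illustrative sketch (not the authors' code), the textual side of such a pipeline could turn each video's title and description into a fixed-length numeric vector, for example with TF-IDF; the field names and sample records below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical YouTube-style metadata records; the field names and
# sample text are illustrative, not drawn from the paper's dataset.
videos = [
    {"title": "ASL lesson 1: fingerspelling", "description": "Learn the ASL alphabet"},
    {"title": "Cooking pasta at home", "description": "A quick weeknight recipe"},
]

# Concatenate title and description into one text string per video.
docs = [v["title"] + " " + v["description"] for v in videos]

# TF-IDF maps each video's metadata text to a fixed-length vector
# that a standard classifier can consume.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X_text = vectorizer.fit_transform(docs)
print(X_text.shape)  # (number of videos, vocabulary size)
```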

The paper compares various approaches for extracting metadata information and combining it with visual features. The results indicate that a modular approach, which learns a separate classifier for each feature type and then combines the classifier outputs, outperforms monolithic approaches that feed all feature types into a single classification function.
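A minimal sketch of the two strategies is shown below, assuming precomputed visual and textual feature matrices and binary sign-language labels; the logistic-regression classifiers and the simple probability average are illustrative stand-ins, not the paper's exact models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for precomputed features: 100 videos with 10 visual
# and 20 textual features each, plus hypothetical binary labels.
rng = np.random.default_rng(0)
X_visual = rng.normal(size=(100, 10))
X_text = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

# Monolithic: concatenate all feature types and train one classifier.
monolithic = LogisticRegression(max_iter=1000)
monolithic.fit(np.hstack([X_visual, X_text]), y)

# Modular: train a separate classifier per feature type, then combine
# their probability outputs (here, a simple average) for the final call.
clf_visual = LogisticRegression(max_iter=1000).fit(X_visual, y)
clf_text = LogisticRegression(max_iter=1000).fit(X_text, y)

p_visual = clf_visual.predict_proba(X_visual)[:, 1]
p_text = clf_text.predict_proba(X_text)[:, 1]
combined = (p_visual + p_text) / 2
predictions = (combined >= 0.5).astype(int)
```

One appeal of the modular route is that each classifier can be tuned to its own feature type, and the combination rule can be swapped without retraining the individual models.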
