
LLMs Speak in Signs Too: Bridging the Communication Gap

December 1, 2025
José Rosales

Recent advances in Natural Language Processing (NLP) have revolutionized access to automated language understanding, largely thanks to zero-shot learning capabilities. These breakthroughs have quickly expanded into multimodal input processing—whether it’s images, sound, or video—positioning LLMs as promising universal models capable of building meaningful representations regardless of content modality or format.

However, while NLP and computer vision have progressed rapidly, other essential communication systems—particularly those used in non-verbal contexts—have lagged behind. Sign Language Translation (SLT), for instance, remains a challenge for modern machine learning approaches due to limited data availability and region-specific variations.

In recent years, researchers in the SLT community have developed innovative methods to address these challenges. Notably, LLM-based architectures are being used to bootstrap semantic and syntactic knowledge from spoken language, helping to close the gap that has long made SLT elusive. These new approaches leverage pretrained models across text, image, and video to align visual and linguistic data, establishing visuo-semantic representations that could power the sign language interpreters of the future.
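At a high level, these approaches often attach a pretrained visual encoder to a frozen LLM through a lightweight projection module that maps clip-level sign features into the LLM's token-embedding space. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the VisuoSemanticAdapter name, the feature dimensions, and the MLP design are assumptions chosen for clarity, not the actual SignLLM or SpaMo implementation.

    import torch
    import torch.nn as nn

    class VisuoSemanticAdapter(nn.Module):
        """Hypothetical adapter: projects pooled sign-video features into the
        token-embedding space of a frozen LLM, so visual clips can be fed to
        the language model alongside the text prompt."""

        def __init__(self, visual_dim: int = 1024, llm_dim: int = 4096):
            super().__init__()
            # A small MLP is a common choice for this kind of cross-modal projection.
            self.proj = nn.Sequential(
                nn.Linear(visual_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
            # visual_feats: (batch, num_clips, visual_dim) from a pretrained
            # video/sign encoder; the output lives in the LLM embedding space.
            return self.proj(visual_feats)

    # Toy usage: eight clip-level features become eight "pseudo-token" embeddings.
    adapter = VisuoSemanticAdapter()
    clip_features = torch.randn(1, 8, 1024)   # placeholder for real encoder output
    pseudo_tokens = adapter(clip_features)    # shape: (1, 8, 4096)
    # In a full system, these pseudo-tokens would be prepended to the prompt
    # embeddings before the frozen LLM decodes the spoken-language translation.

In setups of this kind, training typically updates only the adapter (and sometimes a small subset of LLM parameters), which is precisely how SLT can borrow the LLM's linguistic knowledge despite the limited parallel data available for sign language.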

Architectures like SignLLM [1] and the more recent SpaMo [2] have opened new avenues for SLT to benefit directly from advancements in NLP, potentially transforming sign language processing through cross-modal learning.

  1. Jia Gong, Lin Geng Foo, Yixuan He, Hossein Rahmani, and Jun Liu. 2024. LLMs are good sign language translators. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18362–18372.
  2. Eui Jun Hwang, Sukmin Cho, Junmyeong Lee, and Jong C. Park. 2025. An efficient sign language translation using spatial configuration and motion dynamics with LLMs. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, pages 3901–3920.

About the author

R&D Project Manager, ML Engineer | France
José Rosales completed a graduate program in Data Science and Pattern Classification before earning a Ph.D. in Machine Learning applied to Natural Language Processing at LISN and Inria Paris.
