
Google has officially unveiled “SignGemma,” a new artificial intelligence (AI) model that can translate sign language into spoken text.
The Mountain View-based tech giant revealed on Wednesday, May 28, 2025, that the model, which will be part of the Gemma series of models, is currently being tested and is expected to be launched later this year.
SignGemma will be an open-source AI model, available to individuals and businesses.
What to expect from SignGemma?
SignGemma can understand hand movements and facial expressions
Taking to X (formerly Twitter), the official Google DeepMind handle shared a demo of SignGemma along with some details about its release timeline.
The model was also briefly showcased at the Google I/O event by Gus Martins, Gemma Product Manager at DeepMind.
During the showcase, Martins highlighted that the AI model translates sign language into text in real time, making face-to-face communication seamless.
Notably, it performs best when translating American Sign Language (ASL) into English.
According to MultiLingual, SignGemma can function without an Internet connection, making it suitable for use in areas with limited connectivity.
It is said to be built on the Gemini Nano framework and uses a vision transformer to track and analyse hand movements, shapes, and facial expressions.
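Conceptually, the pipeline the report describes has two stages: a vision transformer encodes video frames of the signer (hand movements, hand shapes, facial expressions), and a language model decodes that encoding into English text. The toy sketch below illustrates only that data flow; every name, shape, and function in it is hypothetical, since SignGemma's actual architecture has not been published in detail.

```python
# Illustrative sketch of a sign-to-text pipeline, NOT SignGemma's real code.
# A vision-transformer encoder would consume frame patches; a text decoder
# would emit the translation. Both stages are mocked here.

from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """One video frame, reduced to a small feature vector for illustration."""
    features: List[float]


def patch_embed(frame: Frame) -> List[float]:
    # A real vision transformer splits each frame into patches and projects
    # them to embeddings; here we simply pass the toy features through.
    return frame.features


def encode_sequence(embeddings: List[List[float]]) -> List[float]:
    # Stand-in for transformer self-attention over the frame sequence:
    # mean-pool the per-frame embeddings into one clip-level vector.
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]


def decode_to_text(clip_vector: List[float]) -> str:
    # Stand-in for the language-model decoder that would map the encoded
    # signing clip to English text.
    return "HELLO" if clip_vector[0] > 0 else "UNKNOWN"


def translate(frames: List[Frame]) -> str:
    embeddings = [patch_embed(f) for f in frames]
    clip = encode_sequence(embeddings)
    return decode_to_text(clip)


if __name__ == "__main__":
    clip = [Frame([0.9, 0.1]), Frame([0.8, 0.2])]
    print(translate(clip))  # prints "HELLO" for this toy input
```

The on-device, offline operation mentioned above is plausible with this kind of design because both stages can run locally once the model weights are downloaded, with no server round-trip per frame.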
DeepMind described it as “our most capable model for translating sign language into spoken text.”