Google has officially launched its Gemma 4 open-source artificial intelligence models, with a comprehensive suite of high-end features for optimised workflows and advanced reasoning.
What is Gemma 4?
Gemma 4 is a family of new open AI models, designed to handle complex reasoning, coding and real-world tasks with ease.
Gemma 4 is natively multimodal, with the ability to generate code, process images and videos, and perform a variety of tasks with greater efficiency.
On Thursday, the Alphabet-owned Google wrote in a blog post, “Purpose-built for advanced reasoning and agentic workflows, Gemma 4 delivers an unprecedented level of intelligence-per-parameter. This breakthrough builds on incredible community momentum: since the launch of our first generation, developers have downloaded Gemma over 400 million times, building a vibrant Gemmaverse of more than 100,000 variants.”
Gemma 4 models
The recently introduced Gemma 4 is available in four sizes: Effective 2B (E2B), Effective 4B (E4B), a 26B Mixture of Experts (MoE) and a 31B dense model.
The Gemma 4 family is built to handle complex logic and agentic workflows, with the largest model, the 31B, currently ranking as the #3 open model in the world on the Arena AI text leaderboard.
Moreover, the 26B model secured the sixth spot.
Thanks to this efficiency, the models are able to outdo rivals 20 times their size.
The 26B and 31B models are able to run on PCs, and their unquantised bfloat16 weights fit efficiently on a single 80 GB Nvidia (NVDA) H100 GPU.
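As a rough back-of-the-envelope check (an illustrative sketch, not a figure from Google's announcement), bfloat16 stores each parameter in two bytes, so the raw weights of both larger models do indeed fit within an H100's 80 GB of memory:

```python
# Illustrative estimate of unquantised bfloat16 weight storage.
# Weights only: real memory use also includes activations and KV cache.

BYTES_PER_BF16_PARAM = 2  # bfloat16 = 16 bits = 2 bytes per parameter

def weight_memory_gb(num_params_billions: float) -> float:
    """Approximate bf16 weight storage in GB for a model of the given size."""
    return num_params_billions * 1e9 * BYTES_PER_BF16_PARAM / 1e9

for size in (26, 31):
    print(f"{size}B model: ~{weight_memory_gb(size):.0f} GB of bf16 weights")
# 26B -> ~52 GB, 31B -> ~62 GB, both under the H100's 80 GB
```

This simple count explains why the weights fit on one GPU, though serving at long context lengths would still consume additional memory beyond the weights themselves.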
The Gemma 4 E2B and E4B models are streamlined for compute and memory efficiency and are compatible with handsets, the Raspberry Pi, and the Nvidia Jetson Orin Nano.