
Microsoft and Google both announced new AI models today, but they are taking quite different approaches: Microsoft is releasing new in-house MAI foundation models that are available only via its Azure Foundry and US-only MAI Playground platforms, while Google is shipping new Gemma 4 open models that can run locally. Google is also switching to an Apache 2.0 license for these new open models.
Let’s start with Microsoft’s new “world-class” in-house MAI models; there are three of them:
“We are rapidly deploying these top-tier models to power our own consumer and commercial products,” Microsoft said today. “You’ll see more models from us soon in Foundry and directly in Microsoft products and experiences.”
Let’s move on to Google’s new Gemma 4 open models, which are available under an Apache 2.0 license rather than the company’s previous custom Gemma license. The models are capable of advanced reasoning, agentic workflows, code generation, and vision and audio creation, and they come in four variants optimized to run locally, including on “billions of Android devices.”
“Built from the same world-class research and technology as Gemini 3, Gemma 4 is the most capable model family you can run on your hardware. They complement our Gemini models, giving developers the industry’s most powerful combination of both open and proprietary tools,” Google explained today.
The company’s larger 26B and 31B Gemma 4 models are designed to run on consumer GPUs and to power IDEs, coding assistants, and agentic workflows. In contrast, the lighter E2B and E4B Gemma 4 models prioritize multimodal capabilities and low-latency processing on mobile and IoT devices, including the Raspberry Pi. These models can also run completely offline.
Google’s new Gemma 4 open models can be downloaded from various platforms, including Hugging Face, Kaggle, and Ollama. “These models undergo the same rigorous infrastructure security protocols as our proprietary models,” the company emphasized today.