Google I/O 2025: Developers, Developers, Developers


Expanding on the updates it announced recently for Android, Google opened its annual I/O conference today with a metric ton of advances for developers who want to take advantage of its AI and platform improvements. There was lots to love for individuals, and we’ll get to that. But first, here’s a look at some of the key advances for developers.

“We’re helping you build excellent, adaptive experiences,” Google vice president Matthew McCullough said, “and helping you stay more productive through updates to our tooling that put AI at your fingertips and throughout your development lifecycle.”

Key developer announcements from the keynote include:

Adaptive app development now includes Android XR, TV, and cars. Over the past few years, Google has expanded Android’s support for form factors with a focus on big screen devices like foldable phones, tablets, and Chromebooks. This year, it’s expanding its adaptive app development capabilities to include Android XR and cars so developers can create apps that run well and look great everywhere. This work includes the new Material 3 Expressive design language, live updates, widget previews with Glance 1.2, and other advances. Compose for TV is now generally available, and Google says it will bring Gemini capabilities to TV in the fall.

Stitch. This new Google Labs experiment lets you create complex user interfaces and front-end code using a simple natural language prompt in English. This can include details like color palettes and desired user experience, and you can refine designs using image inputs, interactive chat, and theme selectors. You can also use an existing image or wireframe, and Stitch can export its designs to Figma for further work.

Jules. This new asynchronous coding agent integrates directly with GitHub and works in the background on busy work like fixing bugs and building out new features. It clones your repositories to a Cloud VM and creates pull requests so you can merge the changes you accept back into your projects.

Firebase Studio. This cloud-based AI workspace launched last month and helps developers turn designs into full-stack AI apps. Now, it provides a plugin for importing Figma designs, automatic provisioning of back-ends through Firebase App Hosting, Gemini-based image generation capabilities, and integration with Unsplash so you don’t have to use placeholder images during development. There’s also an improved app prototyping agent for testing and building on mobile.

Firebase AI Logic. This is the new evolution of Vertex AI in Firebase, and it helps you integrate generative AI into apps on the client side or via Genkit for cloud-based implementations. There are lots of new features on the client and cloud sides, Unity support for games and Android XR experiences, and Go (beta) and Python (alpha) language support.

New Gemini models. Gemini Diffusion is a state-of-the-art model with 4 to 5 times the performance of comparable models; it’s open today to “trusted testers.” And Lyria RealTime is an experimental interactive music generation model that allows anyone to create, control, and perform music.

New Gemini Nano APIs in Chrome. Google Chrome uses Gemini Nano for on-device tasks, and it’s getting several new APIs in Chrome 138+, including a Summarizer API, Language Detector API, Translator API, and Prompt API for Chrome Extensions, plus a Writer API and Rewriter API in origin trials. Gemini Nano in Chrome also supports Firebase and the Gemini Developer API for hybrid AI solutions.
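
To give a sense of how these built-in APIs are used from page JavaScript, here’s a minimal sketch of the Summarizer API. It assumes the global `Summarizer` object and the option names used by Chrome’s built-in AI APIs; the exact shape may differ by Chrome version, so treat it as illustrative rather than definitive.

```javascript
// Hedged sketch of Chrome's built-in Summarizer API (Chrome 138+).
// The global `Summarizer` object and these option names are assumptions
// based on Chrome's built-in AI APIs and may vary by Chrome version.

// Pure helper: builds the options object passed to Summarizer.create().
function summarizerOptions(type) {
  return { type, format: "plain-text", length: "short" };
}

async function summarize(text) {
  if (typeof Summarizer === "undefined") {
    throw new Error("Summarizer API unavailable; requires Chrome 138+");
  }
  // The on-device model may download on first use, so create() can take a while.
  const summarizer = await Summarizer.create(summarizerOptions("key-points"));
  return summarizer.summarize(text); // resolves to a plain-text summary
}
```

Because the model runs on-device via Gemini Nano, no text leaves the browser, which is the main draw over calling a cloud endpoint for these small tasks.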

Chrome DevTools. Chrome’s built-in developer tools are getting AI assistance for debugging style issues in the Elements panel, resolving performance issues in the newly reimagined Performance panel, identifying connectivity issues in the Network panel, and locating source files in the Sources panel.

Carousels improvements. In Chrome 135+, you can now create carousels with fewer lines of CSS and HTML code, and there’s support for new CSS primitives like stylable fragmentation and scroll marker elements.

Gemini Code Assist for Individuals. Google’s free AI coding assistant is now generally available, and it’s powered by Gemini 2.5, like the paid version. Google says that Gemini Code Assist boosts developers’ odds of successfully completing common development tasks by 2.5 times.

Gemini Code Assist for GitHub. Also generally available, Gemini Code Assist for GitHub is powered by Gemini 2.5 and integrates directly into GitHub so you can get AI-powered code reviews on pull requests, automatic pull request summaries, ready-to-commit code suggestions, and more.

Android Studio updates. Google’s integrated development environment (IDE) is already integrated with Gemini for AI-powered coding help, but now it’s getting a new Journeys feature for generating UI tests using natural language. Android Studio is also picking up a Version Upgrade Agent that automatically handles dependency updates.

Google AI Studio. Google’s web-based tool for experimenting with and customizing AI models is updated with a cleaner UI, integrated documentation, usage dashboards, new apps, and a new Generate Media tab for interacting with the Imagen and Veo models. It also includes a new version of the Gemini 2.5 Flash model with stronger performance for coding tasks.

Gemini API. There are new APIs for native audio output, real-time conversations, computer use, and URL context, and the Gemini APIs now support asynchronous function calling for background tasks. The computer use capabilities are based on Google’s work with Project Mariner, and they can handle up to 10 simultaneous tasks.
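
For context, a basic Gemini API call from Node.js looks roughly like the sketch below. It assumes the `@google/genai` SDK package and its `generateContent` method shape, which may differ from the version you have installed, and it requires an API key in the `GEMINI_API_KEY` environment variable.

```javascript
// Hedged sketch of a basic Gemini API text call from Node.js.
// Assumptions: the @google/genai package name and generateContent shape.

// Pure helper: builds the request object for models.generateContent().
function buildRequest(model, prompt) {
  return { model, contents: prompt };
}

async function generate(prompt) {
  // Dynamic import so this file still loads where the SDK isn't installed.
  const { GoogleGenAI } = await import("@google/genai");
  const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
  const response = await ai.models.generateContent(
    buildRequest("gemini-2.5-flash", prompt)
  );
  return response.text;
}
```

The new capabilities from the keynote (native audio, computer use, URL context) layer onto this same client, generally as extra fields in the request object.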

Gemma. Google’s family of multimodal small language models (SLMs) now has a new member, Gemma 3n, that’s optimized for phones, laptops, and tablets and works with audio, text, images, and videos. It can be previewed in Google AI Studio and Google AI Edge, and it will soon be joined by a SignGemma model for translating sign language into spoken language and a MedGemma model for medical text and image use cases.

Wear OS 6. The first developer preview of Wear OS 6 is now available so developers can get started on Material 3 Expressive, new developer tools and watch face APIs, richer media controls, a Credential Manager, and library improvements.

Play improvements. Google is enhancing the Play Store with new topic browse pages that give users focused, relevant, and visually engaging places to explore and dive deep into specific topics. And there’s a new Engage SDK for Collections, a new Travel category, and expansion of these new features into more markets.

Colab. Google’s free web-based notebook helps students and others experiment with machine learning and data science. It’s getting an AI-first reimagining built on Gemini, with agentic capabilities that take actions in your notebooks using natural language, fix errors, transform code, support iterative querying, and deliver many other improvements.

The Google I/O 2025 keynote was overwhelming, but I’ll have more soon.

Thurrott