
Google announced today that its latest Gemini 2.0 family of AI models is available to most customers across all its relevant offerings.
“Our Gemini 2.0 lineup was built with new reinforcement learning techniques that use Gemini itself to critique its responses,” Google Deepmind CTO Koray Kavukcuoglu writes. “This resulted in more accurate and targeted feedback and improved the model’s ability to handle sensitive prompts, in turn. We’re also leveraging automated red teaming to assess safety and security risks, including those posed by risks from indirect prompt injection, a type of cybersecurity attack which involves attackers hiding malicious instructions in data that is likely to be retrieved by an AI system. As the Gemini model family becomes more capable, we’ll continue to invest in robust measures that enable safe and secure use.”
Google launched the Gemini 2.0 wave in December, on the one-year anniversary of the Gemini 1.0 announcement. It is promoting the Gemini 2.0 models as being “agentic,” meaning that they can enable the creation of AI agents that will work on behalf of the user non-interactively and then report back as needed. Since then, it updated the Gemini app with its Gemini Flash 2.0 and Imagen models, providing multimodal generative AI capabilities.
Now, Gemini 2.0 is broadly available across the Google ecosystem.
Gemini app users can access 2.0 experimental models like 2.0 Flash Thinking Experimental, which provides a reasoning interface similar to DeepSeek that “shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model’s line of reasoning.” There is also a version of the 2.0 Flash Thinking model that can interact with Google Maps, Search, and YouTube. And Gemini Advanced subscribers now have priority access to an experimental version of Gemini 2.0 Pro that Google says excels at “complex tasks, providing better factuality and stronger performance for coding and math prompts.”
The Gemini app improvements are available to those with personal Gmail accounts or Gemini Advanced subscriptions. These capabilities will come to Workspace customers soon. (Workspace customers can currently access the Gemini 1.5 models.)
In addition to the recently released Gemini 2.0 Flash Thinking Experimental reasoning model, developers now have access to three new models, Gemini 2.0 Flash, with higher rate limits, stronger performance, and simplified pricing; Gemini 2.0 Flash-Lite, a new variant of Google’s most cost-efficient model yet (in public preview); and Gemini 2.0 Pro, an experimental update to Google’s best model yet for coding and complex prompts.
Additionally, Gemini 2.0 Flash-Lite is available in Google AI Studio and Vertex AI in public preview. And Google notes that Gemini 2.0 delivers a one million context window, multimodal input support, and text, image, and audio output capabilities. A Multimodal Live API is coming soon.
Developers can learn more here.
“Gemini 2.0′ advances in multimodality and native tool use enable us to build new AI agents that bring us closer to our vision of a universal assistant,” Google CEO Sundar Pichai said during his firm’s post-earnings conference call yesterday. “The progress to scale thinking has been super-fast, and the reviews so far have been extremely positive. We are working on even better thinking models and look forward to sharing those with the developer community soon.”