Google Launches Gemini 2.0: A New AI Model for the Agentic Era
Google has announced the launch of Gemini 2.0, the latest iteration of its artificial intelligence (AI) model. Designed for what Google calls the “agentic era,” Gemini 2.0 introduces advanced multimodal capabilities, enabling it to interact, reason, and take proactive actions across a range of tasks. Building on its predecessors, Gemini 1.0 (introduced in December 2023) and Gemini 1.5, the new model further advances multimodality and long-context understanding to process information across text, video, images, audio, and code.

“Information is at the core of human progress. It’s why we’ve focused for more than 26 years on our mission to organise the world’s information and make it accessible and useful. And it’s why we continue to push the frontiers of AI to organise that information across every input and make it accessible via any output, so that it can be truly useful for you,” said Sundar Pichai, CEO of Google and Alphabet.

Available to Developers and Testers

Gemini 2.0 Flash is now available as an experimental model to developers through the Gemini API in Google AI Studio and Vertex AI. Google aims to quickly integrate it into products like Gemini and Search. Starting December 11, Gemini 2.0 Flash will be accessible to all Gemini users.

Introducing Deep Research

Google also unveiled Deep Research, a feature leveraging advanced reasoning and long-context capabilities to act as a research assistant. It explores complex topics and compiles reports on behalf of users. This feature is available within Gemini Advanced.