
Google reinforces AI push with Gemini 2.0 model, Deep Research mode, and more

Google has unveiled its Gemini 2.0 Flash model for all Gemini users, introducing capabilities such as native multimodal output

Gemini 2.0 model (Image: Google)
Harsh Shivam | New Delhi
Last Updated : Dec 12 2024 | 12:05 PM IST
Google launched its second-generation Gemini AI model, stating that it could pave the way for new AI agents. The Gemini 2.0 model brings advances in multimodality, including native image and audio output, and adds native tool use. Describing the new model, Google Chief Executive Officer Sundar Pichai said, “If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful.”
 
In addition to the new model, Google introduced a feature called Deep Research. This mode uses advanced reasoning and long-context capabilities to function as a research assistant, exploring complex topics and compiling reports on the user’s behalf.
 
Gemini 2.0: Availability
 
Google announced that Gemini 2.0 is being rolled out for developers and “trusted testers” and will soon be integrated into products starting with Gemini and Google Search. An experimental Gemini 2.0 Flash model is available to all users on the web version of Gemini AI, with app integration expected soon.
 
The company is also testing the Gemini 2.0 model in Search’s AI Overviews with a limited set of users.
 
Gemini 2.0 Flash: Details

Gemini 2.0 Flash is the first publicly available model in the Gemini 2.0 series. Google described it as the workhorse of the series, offering low latency and enhanced performance. The model outperforms the flagship Gemini 1.5 Pro on key benchmarks.
 
It supports multimodal inputs such as images, video, and audio, along with multimodal outputs, including natively generated images mixed with text and multilingual text-to-speech audio. Gemini 2.0 Flash can also call tools such as Google Search and user-defined functions, and execute code.
 
Developers can access Gemini 2.0 Flash as an experimental model via the Gemini API in Google AI Studio and Vertex AI.
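 
For developers, a minimal request could look like the sketch below. It assumes the google-genai Python SDK and the experimental "gemini-2.0-flash-exp" model identifier used during the rollout, plus an API key from Google AI Studio; these names are tied to the experimental release and may change.

# Minimal sketch: calling experimental Gemini 2.0 Flash via the google-genai
# Python SDK (pip install google-genai). The model ID and SDK surface reflect
# the experimental rollout and are assumptions, not a stable contract.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

# Plain text generation with the experimental 2.0 Flash model.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Summarise what native multimodal output means in one paragraph.",
)
print(response.text)

# Native tool use: ground the answer with Google Search, one of the tools
# the article notes the model can call.
grounded = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="What AI model did Google announce this week?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(grounded.text)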
 
AI agents with Gemini 2.0 Flash
 
The Gemini 2.0 Flash model introduces multimodal reasoning, long-context understanding, and native tool use, enabling agentic experiences. Google has showcased new prototypes for AI agents built on these capabilities, including:
 
An update to Project Astra, previewed at the Google I/O conference.
Project Mariner, exploring future human-agent interactions.
Jules, an AI-powered coding assistant for developers.
 
Gemini 2.0 in AI Overviews
 
AI Overviews in Google Search, which provides AI-generated summaries of search topics, now incorporates Gemini 2.0. This integration enhances its reasoning, allowing it to handle more complex queries, including advanced mathematics, multimodal questions, and coding.
 
Deep Research
 
Deep Research is a new AI mode powered by the Gemini 1.5 Pro model. It acts as a research assistant, exploring complex topics on behalf of users.
 
Upon receiving a prompt, Deep Research creates a multi-step research plan requiring user approval. Users can edit the plan, after which the AI analyses information from the web. It refines its findings through iterative searches and generates a comprehensive report with links to original sources.
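 
The plan-approve-search-report loop described above can be pictured with a short, purely illustrative sketch. Every name below (ResearchTask, draft_plan, search_web, deep_research) is a hypothetical stand-in, not Google's implementation.

# Hypothetical sketch of the Deep Research flow: draft a plan, wait for user
# approval, refine through iterative searches, then compile a report.
from dataclasses import dataclass, field

@dataclass
class ResearchTask:
    topic: str
    plan: list[str] = field(default_factory=list)
    findings: list[str] = field(default_factory=list)

def draft_plan(topic: str) -> list[str]:
    # In the real feature the model drafts this plan; stubbed here.
    return [f"Scope the key questions about '{topic}'",
            f"Search the web for recent sources on '{topic}'",
            f"Cross-check and refine the findings on '{topic}'"]

def search_web(step: str) -> str:
    # Stand-in for one iterative web search the agent would run.
    return f"- Notes and source links for: {step}"

def deep_research(topic: str, approve) -> str:
    task = ResearchTask(topic, draft_plan(topic))
    if not approve(task.plan):        # the user can edit or reject the plan
        return "Research plan was not approved."
    for step in task.plan:            # iterative search and refinement
        task.findings.append(search_web(step))
    return "\n".join([f"Report: {topic}", *task.findings])

# Auto-approve for demonstration; the real feature waits for the user.
print(deep_research("agentic AI models", approve=lambda plan: True))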
 
The feature is rolling out to Gemini Advanced subscribers on the web version and will be available on the Gemini mobile app in early 2025.

First Published: Dec 12 2024 | 12:05 PM IST
