On Tuesday, Google unveiled the latest iteration of its artificial intelligence model, Gemini, stressing that the enhanced features will be made available promptly across various revenue-generating products, including its search engine.
Google Gemini 3, which arrives 11 months after the release of the model's second generation, appears to put Google at the leading edge of the multimodal AI race. In a press briefing, company executives underscored Gemini 3's top ranking on multiple well-known industry leaderboards that evaluate the performance of AI models.
CEO Sundar Pichai called it "our most intelligent model" in a company blog post.
Nevertheless, the focus of the AI race has progressively shifted from benchmarks to profitable applications of the technology, as Wall Street remains vigilant for signs of an AI bubble. This year, Alphabet’s stock has been primarily supported by the financial success derived from AI products in its cloud computing sector.
However, even with prominent developers such as Google, OpenAI, and Anthropic involved, recent AI model updates have struggled to set themselves apart, garnering attention primarily when they fail, as was the case with Meta (META.O) earlier this year.
Enhanced Reasoning Capabilities
A comparison between Gemini 3 and earlier versions shows the latest update surpassing Gemini 2.5 Pro across all major AI benchmarks. Gemini 3 tops the LMArena leaderboard with an Elo score of 1501 and exhibits PhD-level reasoning, achieving top scores on Humanity's Last Exam and GPQA Diamond.
Beyond text, Gemini 3 advances multimodal reasoning, scoring 81% on MMMU-Pro and 87.6% on Video-MMMU. It also achieves a state-of-the-art 72.1% on SimpleQA Verified, indicating gains in factual reliability.
According to Google, Gemini 3 produces more intelligent, succinct, and direct responses that provide genuine insight.
In Google Search's AI Mode, the model can explain intricate scientific concepts and build interactive learning tools, pairing them with code-driven, high-fidelity visualizations.
How Gemini 3 changes Google Search
In conjunction with Gemini 3 Pro, Google is unveiling Gemini 3 Deep Think, a refined reasoning mode that demonstrates even more robust performance.
During testing, it achieved 41% on Humanity's Last Exam, 93.8% on GPQA Diamond, and an unprecedented 45.1% on ARC-AGI-2 with code execution.
Google asserts that this mode enables the model to address more innovative and intricate challenges than ever before.
Each iteration of Gemini has built on its predecessor, allowing users to accomplish more, Pichai stated.
The redesigned Gemini app, with its extended context, multilingual strengths, and multimodal synthesis, now supports learning workflows such as analyzing video lectures, translating handwritten recipes, coding interactive flashcards, and breaking down pickleball match footage.
On the development front, Google is introducing Google Antigravity, an agent-first development platform built on Gemini 3's reasoning and tooling capabilities.
In addition, Gemini 3 leads the WebDev Arena leaderboard, scores 54.2% on Terminal-Bench 2.0, and sets a new mark of 76.2% on SWE-bench Verified.
Pichai remarked that Gemini 3 integrates all of Gemini’s functionalities so you can realize any concept. He further noted that Gemini 3 is cutting-edge in its reasoning and superior at understanding user intent, making it more intuitive and beneficial.
Google announced that Gemini 3 is launching today across the Gemini app, AI Mode in Search, AI Studio, Gemini CLI, Vertex AI, and Antigravity. The company anticipates additional models in the Gemini 3 series to follow soon.
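For developers who want to try the model through AI Studio or the Gemini API, a minimal call might look like the sketch below. This is an illustrative example only: the model identifier "gemini-3-pro-preview" is an assumption and should be checked against the current model list in AI Studio or Vertex AI.

```python
# Minimal sketch: calling Gemini 3 via the Gemini API using the google-genai SDK.
# Assumes an API key from AI Studio; the model name is an assumption, not confirmed here.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # key created in AI Studio

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed identifier for the new model
    contents="Summarize the key benchmark results announced for Gemini 3.",
)
print(response.text)
```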
Google Gemini 3 — New features explained
Improvements to Gemini 3 in areas such as coding and reasoning have enabled Google to develop a range of new features for both consumers and enterprise clients.
The company introduced Gemini Agent, a feature capable of executing multi-step tasks, including organizing users' inboxes or arranging travel plans. This tool aligns with AI chief Demis Hassabis's vision of a universal assistant, internally known as AlphaAssist, as previously reported by Reuters.
Additionally, the redesigned Gemini app provides answers that resemble a comprehensive website, posing a significant challenge to content publishers who depend on web traffic for revenue.
Josh Woodward, the Vice President overseeing the app, showcased to reporters how Gemini can now handle a request such as “create a Van Gogh gallery with live context for each piece” by producing an on-demand interface featuring visual and interactive components.
Google highlighted that Gemini 3, unlike previous versions, already supported several revenue-generating consumer and enterprise products at launch.
Gemini has established a significantly faster pace for both the release of models and their availability to users than ever before, said Koray Kavukcuoglu, Google's chief AI architect, during the press briefing.
Pichai noted that the launch of Gemini 3 marked the first time Google integrated a new model into its search engine from day one. Historically, new Gemini iterations took weeks or even months to reach Google's most widely used products.
Subscribers to Google's premium AI subscription service will gain access to Gemini 3 features in AI Mode, a search experience that replaces conventional web results with AI-generated responses to complex queries.
For corporate clients, Google has introduced a new product, Antigravity, a software development platform that enables AI agents to autonomously plan and execute coding tasks.