50 Sources
[1]
Google unveils Gemini 3 AI model and AI-first IDE called Antigravity
Google has kicked its Gemini rollout into high gear over the past year, releasing the much-improved Gemini 2.5 family and cramming various flavors of the model into Search, Gmail, and just about everything else the company makes. Now, Google's increasingly unavoidable AI is getting an upgrade. Gemini 3 Pro is available in a limited form today, featuring more immersive, visual outputs and fewer lies, Google says. The company also says Gemini 3 sets a new high-water mark for vibe coding, and Google is announcing a new AI-first integrated development environment (IDE) called Antigravity, which is also available today. Google says the release of Gemini 3 is yet another step toward artificial general intelligence (AGI). The new version of Google's flagship AI model has expanded simulated reasoning abilities and shows improved understanding of text, images, and video. So far, testers like it -- Google's latest LLM is once again atop the LMArena leaderboard with an Elo score of 1,501, besting Gemini 2.5 Pro by 50 points. Factuality has been a problem for all gen AI models, but Google says Gemini 3 is a big step in the right direction, and there are myriad benchmarks to tell the story. In the 1,000-question SimpleQA Verified test, Gemini 3 scored a record 72.1 percent. Yes, that means the state-of-the-art LLM still screws up almost 30 percent of general knowledge questions, but Google says this still shows substantial progress. On the much more difficult Humanity's Last Exam, which tests PhD-level knowledge and reasoning, Gemini 3 set another record, scoring 37.5 percent without tool use. Math and coding are also a focus of Gemini 3. The model set new records in MathArena Apex (23.4 percent) and WebDev Arena (1,487 Elo). On SWE-bench Verified, which tests a model's ability to generate code, Gemini 3 hit an impressive 76.2 percent.
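To put the 50-point LMArena lead in perspective: Arena rankings use an Elo-style rating, and under the standard Elo formula a rating gap maps to an expected head-to-head preference rate. This is a minimal sketch using the textbook Elo expectation; LMArena's exact rating methodology (a Bradley-Terry variant) may differ in detail.

```python
# Expected head-to-head win rate implied by an Elo-style rating gap.
# Uses the standard Elo logistic formula; illustrative only, since
# LMArena's precise rating computation may differ.
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Probability that A is preferred over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 50-point lead (e.g., 1,501 vs. 1,451) implies roughly a 57%
# preference rate in pairwise comparisons -- a clear but not
# overwhelming edge.
print(round(elo_win_probability(1501, 1451), 3))
```

In other words, "besting Gemini 2.5 Pro by 50 points" means human raters would be expected to prefer the newer model in a bit under six of every ten blind comparisons.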
[2]
Google launches Gemini 3 with new coding app and record benchmark scores | TechCrunch
On Tuesday, Google released Gemini 3, its latest and most advanced foundation model, which is now immediately available through the Gemini app and AI search interface. Coming just seven months after the Gemini 2.5 release, the new model is Google's most capable LLM yet, and an immediate contender for the most capable AI tool on the market. The release also comes less than a week after OpenAI released GPT 5.1, and a mere two months after Anthropic released Sonnet 4.5 -- a reminder of the blistering pace of frontier model development. A more research-intensive version of the model, called Gemini 3 Deepthink, will also be made available to Google AI Ultra subscribers in the coming weeks, once it passes further rounds of safety testing. "With Gemini 3, we're seeing this massive jump in reasoning," said Tulsee Doshi, Google's head of product for the Gemini model. "It's responding with a level of depth and nuance that we haven't seen before." Some of that reasoning power is already registering on independent benchmarks. With a score of 37.4, the model marked the highest score on record on the Humanity's Last Exam benchmark, meant to capture general reasoning and expertise. The previous high score, held by GPT-5 Pro, was 31.64. Gemini 3 also topped the leaderboard on LMArena, a human-led benchmark that measures user satisfaction. According to Google, the Gemini app currently has more than 650 million monthly active users, and 13 million software developers have used the model as part of their workflow. Alongside the base model, Google also released a Gemini-powered coding interface called Google Antigravity, allowing for multi-pane agentic coding similar to agentic IDEs like Warp or Cursor 2.0. Specifically, Antigravity combines a ChatGPT-style prompt window with a command-line interface and a browser window that can show the impact of the changes made by the coding agent. 
"The agent can work with your editor, across your terminal, across your browser to make sure that it helps you build that application in the best way possible," said DeepMind CTO Koray Kavukcuoglu.
[3]
Google's new Gemini 3 vibe-codes its responses and comes with its own agent
When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. "Visual layout generates an immersive, magazine-style view complete with photos and modules," says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. "These elements don't just look good but invite your input to further tailor the results." With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward "a true generalist agent." It will be available on the web for Google AI Ultra subscribers in the US starting November 18. The overall approach can seem a lot like "vibe coding," where users describe an end goal in plain language and let the model assemble the interface or code needed to get there. The update also ties Gemini more deeply into Google's existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variation of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model's reasoning rather than the existing AI Mode.
[4]
Gemini 3 Is Here -- and Google Says It Will Make Search Smarter
Google has introduced Gemini 3, its smartest artificial intelligence model to date, with cutting-edge reasoning, multimedia, and coding skills. As talk of an AI bubble grows, the company is keen to stress that its latest release is more than just a clever model and chatbot -- it's a way of improving Google's existing products, including its lucrative search business, starting today. "We are the engine room of Google, and we're plugging in AI everywhere now," Demis Hassabis, CEO of Google DeepMind, an AI-focused subsidiary of Google's parent company, Alphabet, told WIRED in an interview ahead of the announcement. Hassabis admits that the AI market appears inflated, with a number of unproven startups receiving multibillion-dollar valuations. Google and other AI firms are also investing billions in building out new data centers to train and run AI models, sparking fears of a potential crash. But even if the AI bubble bursts, Hassabis thinks Google is insulated. The company is already using AI to enhance products like Google Maps, Gmail, and Search. "In the downside scenario, we will lean more on that," Hassabis says. "In the upside scenario, I think we've got the broadest portfolio and the most pioneering research." Google is also using AI to build popular new tools like NotebookLM, which can auto-generate podcasts from written materials, and AI Studio which can prototype applications with AI. It's even exploring embedding the technology into areas like gaming and robotics, which Hassabis says could pay huge dividends in years to come, regardless of what happens in the wider market. Google is making Gemini 3 available today through the Gemini app and in AI Overviews, a Google Search feature that synthesizes information alongside regular search results. 
In demos, the company showed that some Google queries, like a request for information about the three-body problem in physics, will prompt Gemini 3 to automatically generate a custom interactive visualization on the fly. Robby Stein, vice president of product for Google Search, said at a briefing ahead of the launch that the company has seen "double-digit" increases in queries phrased in natural language, which are most likely targeted at AI Overviews, year over year. The company has also seen a 70 percent spike in visual search, which relies on Gemini's ability to analyze photos. Despite investing heavily in AI and making key breakthroughs, including inventing the transformer model that powers most large language models, Google was shaken by the sudden rise of ChatGPT in 2022. The chatbot not only vaulted OpenAI to center stage when it came to AI research; it also challenged Google's core business by offering a new and potentially easier way to search the web.
[5]
Google Says New Gemini 3 AI Model Is Its Most Capable Yet
Gemini 3, the latest AI model from Google, is the company's most intelligent model to date, with more advanced multimodal and vibe coding capabilities, the company said in a blog post on Tuesday. It's available now. Google says Gemini 3 is "built to grasp depth and nuance" and is better at understanding the intent behind a user's request. The company also touted Gemini 3's multimodal capabilities, such as its ability to turn a long video lecture into interactive flash cards or to analyze a person's pickleball match and find areas for improvement. Gemini 3 isn't limited to the app. It'll also be available in AI Mode in Search and, for Pro and Ultra subscribers, in AI Overviews. In AI Overviews, Gemini 3 can generate interactive elements. Google DeepMind's Demis Hassabis and Koray Kavukcuoglu said in a blog post that Gemini 3 Pro is less sycophantic, a problem that's been plaguing AIs and leading to AI psychosis in some. It's also more secure against prompt injection attacks, a type of attack in which bad actors try to make an AI ignore its original instructions and perform unintended actions. The company also unveiled Google Antigravity, a new agentic development platform. Google says Antigravity is like having an active partner while making tools or working on projects, autonomously planning and executing complex software tasks while validating its own code. It works in tandem with Gemini 2.5's Computer Use model for browser control and with nano banana, Gemini 2.5's image model. Gemini 3's new, more powerful agentic capabilities will only be available to $250/month Google AI Ultra subscribers at first. This will allow the Gemini Agent to do multi-step workflows, like planning a travel itinerary.
Google's release of Gemini 3 comes as an AI war is heating up between it, OpenAI, Anthropic and xAI. Google's consistently been leading AI leaderboards, although other AI models haven't been far behind, sometimes trading spots at the top. With the release of Gemini 3, Google seems to be trying to solve for some of AI's more annoying problems, like hallucinations or sycophancy. It's also trying to prove that AIs can be truly agentic, being able to accomplish tasks on the user's behalf. Other agentic models have proven to be problematic in real-world usage and run into various security concerns, especially in web browsers. The latest AI release from Google also comes at a time when there are fears of an AI bubble forming in the stock market. AI companies, including Nvidia, Google, Meta and Microsoft, account for 30% of the S&P 500. Google currently has a valuation of $3.4 trillion. Even Google CEO Sundar Pichai says the trillion-dollar AI investment boom has "elements of irrationality" and that a burst would affect every AI company, in an interview with the BBC. Still, if Google is to keep its stock price moving upward, it needs to demonstrate that its AI models beat the competition.
[6]
Google is launching Gemini 3, its 'most intelligent' AI model yet
Google is beginning to launch Gemini 3 today, a new series of models that the company says are its "most intelligent" and "factually accurate" AI systems yet. They're also a chance for Google to leap ahead of OpenAI following the rocky launch of GPT-5, potentially putting the company at the forefront of consumer-focused AI models. For the first time, Google is giving everyone access to its new flagship AI model -- Gemini 3 Pro -- in the Gemini app on day one. It's also rolling out Gemini 3 Pro to subscribers inside Search. Tulsee Doshi, Google DeepMind's senior director and head of product, says the new model will bring the company closer to making information "universally accessible and useful" as its search engine continues to evolve. "I think the one really big step in that direction is to step out of the paradigm of just text responses and to give you a much richer, more complete view of what you can actually see." Gemini 3 Pro is "natively multimodal," meaning it can process text, images, and audio all at once, rather than handling them separately. As an example, Google says Gemini 3 Pro could be used to translate photos of recipes and then transform them into a cookbook, or it could create interactive flashcards based on a series of video lectures. You'll spot some of these improvements across Google's suite of products, including the Gemini app, where you can build more "full-featured" programs inside the built-in workspace, Canvas. The upgraded AI model will also enable "generative interfaces," a tool Google is testing in Gemini Labs that allows Gemini 3 Pro to create a visual, magazine-style format with pictures you can browse through, or a dynamic layout with a custom user interface tailored to your prompt. Gemini 3 Pro in AI Mode -- the AI-powered Google Search feature -- will similarly present you with visual elements, like images, tables, grids, and simulations based on your query.
It's also capable of performing more searches using an upgraded version of Google's "query fan-out technique," which now not only breaks down questions into bits it can search for on your behalf, but is better at understanding intent to help "find new content that it may have previously missed," according to Google's announcement. Google is also not so subtly jabbing at OpenAI, describing Gemini 3 Pro as less prone to the type of empty flattery espoused by ChatGPT. Doshi says you'll see "noticeable" changes to Gemini 3 Pro's responses, which Google describes as "smart, concise and direct, trading cliché and flattery for genuine insight -- telling you what you need to hear, not just what you want to hear." The company says it also shows "reduced sycophancy," an issue OpenAI had to address with ChatGPT earlier this year. Along with these improvements, Gemini 3 Pro comes with better reasoning and agentic capabilities, allowing it to complete more complex tasks and "reliably plan ahead over longer horizons," according to Google. The AI model is powering an experimental Gemini Agent feature that can perform tasks on your behalf inside the Gemini app, such as reviewing and organizing emails, or researching and booking travel. Gemini 3 Pro now sits at the top of LMArena's leaderboard, a popular platform used for benchmarking AI models. A Deep Think mode enhances the model's reasoning capabilities even further, though it's currently only available to safety testers. Gemini 3 Pro is available inside the Gemini app for everyone starting today, while Google AI Pro and Ultra subscribers in the US can try out Gemini Agent in the Gemini app, along with Gemini 3 Pro inside AI Mode by selecting "Thinking" from the model dropdown.
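The "query fan-out" idea described above -- breaking one question into several narrower searches and merging the results -- can be sketched in a few lines. This is a toy illustration only: in Google's system the decomposition is model-driven, whereas here the sub-queries are hard-coded hypotheticals.

```python
# Toy illustration of a query fan-out: expand one question into
# several sub-queries, run each against a search backend, and merge
# the results. The real system generates sub-queries with a model;
# the templates below are hypothetical stand-ins.
def fan_out(query: str) -> list[str]:
    return [
        f"{query} definition",
        f"{query} examples",
        f"{query} recent news",
    ]

def search_all(query: str, search_fn) -> list[str]:
    """Run every sub-query through search_fn and merge results."""
    results: list[str] = []
    for sub in fan_out(query):
        results.extend(search_fn(sub))
    # Deduplicate while preserving first-seen order.
    return list(dict.fromkeys(results))
```

The merge step matters: overlapping sub-queries often return the same documents, so results are deduplicated while preserving ranking order before being handed to the model for synthesis.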
[7]
Google just rolled out Gemini 3 to Search - here's what it can do and how to try it
The new Search is available to Google AI Pro and Ultra subscribers. Google just made the biggest change to its search engine since the company debuted AI Overviews in it last year. The company officially debuted Gemini 3, its latest AI model, on Tuesday morning, and has already integrated it with Search, which Google says will enable deeper contextual awareness, more sophisticated reasoning capabilities, and multimedia responses to help users unlock more useful information from the web. This marks the first time that a new AI model from Google has been fused with its search engine from the jump, which signals a growing level of confidence from the company as it races against OpenAI, Microsoft, Meta, and Amazon -- both to deploy new models and make them accessible through existing consumer-facing tools. The newly upgraded Google search is designed to simultaneously optimize for both range and specificity: it covers a wider portion of the web to search for all relevant results to a given query, and is also engineered to read between the lines of that query, as it were, to get a clear sense of the user's true intent. "Gemini 3 brings incredible reasoning power to Search because it's built to grasp unprecedented depth for your hardest questions," Elizabeth Reid, VP and Head of Search at Google, wrote in a company blog post. Google has growing competition in this regard. AI startups like OpenAI and Perplexity have launched their own AI-powered web browsers with an eye toward stealing a slice of the pie that's long been hoarded by Google with its undisputed dominance over online search.
But Google has the obvious advantage: that millions of people already rely on its browser and search engine, which means that it can easily embed its new AI model into those people's daily online routines, just as it did -- like it or not -- with AI Overviews. Reid added in the blog post that, in the coming weeks, Google will also update its automatic model selection feature in Search, so that the most challenging queries automatically get funneled to Gemini 3 while older and faster models tackle easier tasks. Google Search is also getting a multimodal upgrade thanks to the newly released Gemini 3. Rather than just responding with text, web links, and images, Gemini 3 in AI Mode can automatically generate visual aids to help users gain a more thorough understanding of the information they're seeking. "When the model detects that an interactive tool will help you better understand the topic, it uses its generative capabilities to code a custom simulation or tool in real-time and adds it into your response," Reid wrote in the blog post. If you're trying to wrap your mind around the infamous double-slit experiment, for example -- which is foundational to quantum mechanics and shows that subatomic particles can act as both particles and waves -- the newly upgraded, multimodal Google search might provide you with an interactive simulation so that, rather than just reading about the experiment, you can directly engage with it. Welcome news, no doubt, to the many people out there who consider themselves visual and/or hands-on learners. Gemini 3 Pro, the first of the new Gemini 3 family of models, is available now for Google AI Pro and Ultra subscribers. To take the new model's search capabilities for a spin, just select "Thinking" from the drop-down menu in AI Mode.
The company plans to release the new model via AI mode to all US users soon, with higher limits for Google AI Pro and Ultra subscribers.
[8]
ChatGPT Who? Google Releases Gemini 3, Now in AI Mode
Days after OpenAI released GPT 5.1, Google is introducing its own, potentially more powerful AI model with Gemini 3, which starts rolling out to users today. The company says Gemini 3 is Google's most intelligent AI model. It hyped up Gemini 2.5 with the same language back in March, but this time, Google is confident enough to release Gemini 3 immediately to everyone. "This is the first time we are shipping Gemini in Search on day one," says CEO Sundar Pichai. Gemini 3 is rolling out via AI Mode, Google's ChatGPT-like interface on the Google search engine. One standout feature is how Gemini 3 can display "immersive visual layouts and interactive tools and simulations, all generated completely on the fly based on your query." The new model is also launching for all users in the Gemini app. And Google promises a boost for regular searches, too. For each query, the company plans to use Gemini 3 to perform "even more searches to uncover relevant web content" that may have been missed previously. In addition, the company is signaling that Gemini 3 will eventually power AI Overviews, the Google Search function that automatically summarizes the answer to your query (for better or worse) at the top of the results page. "In the coming weeks, we're also enhancing our automatic model selection in Search with Gemini 3. This means Search will intelligently route your most challenging questions in AI Mode and AI Overviews to this frontier model -- while continuing to use faster models for simpler tasks," the company wrote in a separate blog post. That said, the Gemini 3 integration with AI Overviews will initially be limited to paid subscribers of Google's AI plans. In the US, they can also use a more powerful Gemini 3 Pro model starting today via AI Mode by selecting the "Thinking" option from the model drop-down menu. 
According to Google, the Gemini 3 model outperforms GPT 5.1 and Anthropic's Claude Sonnet 4.5 across a wide range of AI-related benchmarks, including math, scientific reasoning, and multilingual questions and answers. "It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem," Pichai added. Ironically, though, the same benchmarks also indicate Gemini 3 struggles with "visual reasoning puzzles" and "academic reasoning." However, the Google model still beats the competition in these tests. The company also says Gemini 3 has been built to resist "sycophancy" and "prompt injection" attacks that can manipulate the AI into executing malicious instructions. To attract software developers, the company also announced "Google Antigravity," a suite of AI-powered tools for computer programming, an apparent response to rival coding programs from OpenAI and Anthropic. "Using Gemini 3's advanced reasoning, tool use, and agentic coding capabilities, Google Antigravity transforms AI assistance from a tool in a developer's toolkit into an active partner," Google's CEO said. "Now, agents can autonomously plan and execute complex, end-to-end software tasks simultaneously on your behalf while validating their own code." In addition, Google has developed an even smarter "Gemini 3 Deep Think mode." But the company wants to be careful with its release. "We're taking extra time for safety evaluations and input from safety testers before making it available to Google AI Ultra subscribers in the coming weeks," the company explained.
[9]
Google unveils Gemini 3 AI model, Antigravity agentic development tool
Gemini 3 excels at coding, agentic workflows, and complex zero-shot tasks, while Antigravity shifts AI-assisted coding from agents embedded within tools to an AI agent as the primary interface, Google said. Google has introduced the Gemini 3 AI model, an update of Gemini with improved visual reasoning, and Google Antigravity, an agentic development platform for AI-assisted software development. Both were announced on November 18. Gemini 3 is positioned as offering reasoning capabilities with robust function calling and instruction adherence to build sophisticated agents. Agentic capabilities in Gemini 3 Pro are integrated into agent experiences in tools such as Google AI Studio, Gemini CLI, Android Studio, and third-party tools. Reasoning and multimodal generation enable developers to go from concept to a working app, making Gemini 3 Pro suitable for developers at any experience level, Google said. Gemini 3 surpasses Gemini 2.5 Pro at coding, mastering both agentic workflows and complex zero-shot tasks, according to the company. Gemini 3 is available in preview at $2 per million input tokens and $12 per million output tokens for prompts of 200K tokens or less through the Gemini API in Google AI Studio and Vertex AI for enterprises. Google Antigravity, meanwhile, shifts from agents embedded within tools to an AI agent as the primary interface. It manages surfaces such as the browser, editor, and the terminal to complete development tasks. While the core is a familiar AI-powered IDE experience, Antigravity is evolving the IDE toward an agent-first future with browser control capabilities, asynchronous interaction patterns, and an agent-first product form factor, according to Google. Together, these enable agents to autonomously plan and execute complex, end-to-end software tasks. Available in public preview, Antigravity is compatible with Windows, Linux, and macOS.
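Given the preview pricing quoted above ($2 per million input tokens and $12 per million output tokens for prompts of 200K tokens or less), a quick back-of-the-envelope cost estimate is straightforward. This sketch covers only the quoted tier; pricing for prompts above 200K tokens isn't stated in the article, so the function simply refuses those.

```python
# Rough per-call cost estimate for the Gemini 3 preview tier quoted
# above: $2 / 1M input tokens, $12 / 1M output tokens, for prompts of
# 200K tokens or less. Larger prompts fall into a tier whose pricing
# isn't given here, so they raise an error instead of guessing.
INPUT_USD_PER_M = 2.0
OUTPUT_USD_PER_M = 12.0
TIER_LIMIT_TOKENS = 200_000

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    if input_tokens > TIER_LIMIT_TOKENS:
        raise ValueError("quoted tier covers prompts of 200K tokens or less")
    return (input_tokens / 1e6) * INPUT_USD_PER_M + \
           (output_tokens / 1e6) * OUTPUT_USD_PER_M

# e.g., a 50K-token prompt producing a 2K-token answer costs about
# $0.10 for input plus $0.024 for output.
print(f"${estimate_cost_usd(50_000, 2_000):.3f}")
```

Note how the pricing is output-weighted: at 6x the input rate, long generated responses dominate the bill even for large prompts.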
[10]
Google's Gemini 3 is finally here and it's smarter, faster, and free to access
The model is faster, smarter, and better at coding and multimodality. With its Gemini AI models, Google has integrated its AI assistant across several of its most popular platforms, including Search, Google Workspace, and Android devices. Now, the company is releasing its latest and greatest model to upgrade user experiences throughout its ecosystem. On Tuesday, Google finally launched Gemini 3, which the company claims is the "best model in the world for multimodal understanding and our most powerful agentic and vibe coding model yet." The claim is supported by benchmark data, crowd-sourced Arena results, and more advanced use cases that the chatbot has not previously been able to tackle. The best part? Everyone can try it across Google's suite of tools, including Search, the Gemini App, AI Studio, and Vertex AI, as well as a brand-new agentic development platform, Google Antigravity. To learn about what's new and what you can expect, keep reading below. Google is kicking off the Gemini 3 era with the launch of Gemini 3 Pro. Gemini 3 Pro outperforms its predecessor, Gemini 2.5 Pro, across all major AI benchmarks -- but before I get into the data, here's what it practically means for you. Gemini 3 Pro has been designed to provide users with better quality answers that have a new "level of depth and nuance," according to Google. This includes more smart, concise, and direct answers that focus on helpfulness rather than flattery. Of course, as the benchmarks support, the model is designed to be smarter and reason more effectively, demonstrating improved performance in Ph.D.-level reasoning that enables it to better tackle complex problems, such as those in science and mathematics.
Google said the model also pushes the frontiers of multimodal reasoning by being able to synthesize information about a topic across multiple modalities, including text and images, video, audio, and code. For everyday help, this means you can input information in more ways, have the model understand, and output responses in whichever multimodal medium is best fit for your response. Lastly, for developers, the model reflects a major win, as it's the best vibe coding and agentic coding model Google has released, climbing to the top of the WebDev Arena, a leaderboard compiled through user votes after comparing the anonymous models' ability to build a site and voting for the best one. Developers can leverage coding capabilities through Google AI Studio, Vertex AI, Gemini CLI, and third-party platforms such as Cursor, GitHub, JetBrains, and others. The company also launched Google Antigravity, a brand-new agent development platform that functions as an active coding partner, allowing agents to plan and execute complex, end-to-end software tasks and even validate their own code. The agentic capabilities don't end there, with Google sharing that it will be able to take action on your behalf with more complex workflows, such as reorganizing your Gmail inbox. However, these more advanced capabilities are made accessible through the Gemini Agent in the Gemini App for Google AI Ultra subscribers. Gemini 3 Pro skyrocketed to the top of the LMArena Leaderboard with a score of 1,501 points, a noteworthy feat as that score is compiled through anonymous voting against top models from Anthropic, Meta, xAI, DeepSeek, and more.
Reflecting its advanced reasoning, the model scored 91.9% on GPQA Diamond, a benchmark used to evaluate Ph.D.-level reasoning; a top score on Humanity's Last Exam (37.5% without the use of tools), a multimodal exam that tests a variety of fields; and 81% on MMMU-Pro and 87.6% on Video MMMU, benchmarks used to test multimodal reasoning. The advanced reasoning capabilities also translate to higher scores on math benchmarks, such as MathArena Apex, where the model scored a state-of-the-art 23.4%, according to Google. The company also launched Gemini 3 Deep Think, which has deeper intelligence, reasoning, and understanding capabilities. The Deep Think version outperformed Gemini 3 on Humanity's Last Exam (41% without using tools) and GPQA Diamond (93.8%). Everyone can access Gemini 3 in the Gemini App. To access Gemini 3 in AI Mode in Search, users must have a Google AI Pro or Ultra subscription. Developers can access it in the Gemini API in AI Studio, Antigravity, and Gemini CLI. Lastly, enterprises can access it in Vertex AI in Gemini Enterprise.
[11]
Google announces Gemini 3 as battle with OpenAI intensifies
Google is debuting its latest artificial intelligence model, Gemini 3, as the search giant races to keep pace with ChatGPT creator OpenAI. The new AI model will allow users to get better answers to more complex questions, "so you get what you need with less prompting," Alphabet CEO Sundar Pichai said in one of several blog posts Google published Tuesday. Gemini 3 will be integrated into the Gemini app, Google's AI search products AI Mode and AI Overviews, as well as its enterprise products. The rollout begins Tuesday for select subscribers and will go out more broadly in the coming weeks. The announcement comes about eight months after Google introduced Gemini 2.5 and 11 months after Gemini 2.0. OpenAI, which kicked off the generative AI boom in late 2022 with the public launch of ChatGPT, introduced GPT-5 in August. "It's amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room," Pichai wrote in one of Tuesday's posts. "Starting today, we're shipping Gemini at the scale of Google." The Gemini app now has 650 million monthly active users and AI Overviews has 2 billion monthly users, the company said. OpenAI said in August that ChatGPT hit 700 million weekly users. Pichai added that the newest model is "built to grasp depth and nuance," and said Gemini 3 is also "much better at figuring out the context and intent behind your request, so you get what you need with less prompting." Google's other AI models may still be used for simpler tasks, the company said. Alphabet and its megacap rivals are spending heavily to build out the infrastructure for AI development and to rapidly create more services for consumers and businesses. In their earnings reports last month, Alphabet, Meta, Microsoft and Amazon each lifted their guidance for capital expenditures, and collectively expect that number to reach more than $380 billion this year.
Google said AI responses powered by Gemini 3 will be "trading cliché and flattery for genuine insight -- telling you what you need to hear, not what you want to hear," according to a statement from Demis Hassabis, CEO of Google's AI unit DeepMind. Industry critics have said today's AI chatbots are too sycophantic. Last week, OpenAI issued two updates to GPT-5. One is "warmer, more intelligent, and better at following your instructions," the company said, and the other is "faster on simple tasks, more persistent on complex ones."
[12]
Google's new Gemini 3 model arrives in AI Mode and the Gemini app
A few weeks short of a year after it introduced Gemini 2.0, Google has announced Gemini 3. Naturally, the company claims the new system is its most intelligent AI model yet, offering state-of-the-art reasoning, class-leading vibe coding performance and more. The good news is you can put those claims to the test today, with Google making Gemini 3 Pro available across many of its products and services. Google is highlighting a couple of benchmarks to tout Gemini 3 Pro's performance. On Humanity's Last Exam, widely considered one of the toughest tests AI labs can put their systems through, the model delivered a new top accuracy score of 37.5 percent, beating the previous leader, Grok 4, by an impressive 12.1 percentage points. Notably, it achieved its score without turning to tools like web search. On LMArena, meanwhile, Gemini 3 Pro is now on top of the site's leaderboards with a score of 1,501 points. Okay, but what about the practical benefits of Gemini 3 Pro? In the Gemini app, the new model will translate to answers that are more concise and better formatted. It also enables a new feature Google calls Gemini Agent. The tool builds on Project Mariner, the web-surfing Chrome AI the company debuted at the end of last year. It allows users to ask Gemini to complete tasks for them. For example, say you want help managing your email inbox. In the past, Gemini would have offered some general tips. Now, it can do that work for you. To try Gemini 3 Pro inside of the Gemini app, select "Thinking" from the model picker. The new model is available to everyone, though AI Plus, Pro and Ultra subscribers can use it more often before hitting their rate limit. To make the most of Gemini Agent, you'll need to grant the tool access to your Google apps. In Search, meanwhile, Gemini 3 Pro will debut inside of AI Mode, with availability of the new model first rolling out to AI Pro and Ultra subscribers. Google will also bring the model to AI Overviews, where it will be used to answer the most difficult questions people ask of its search engine.
In the coming weeks, Google plans to roll out a new routing algorithm for both AI Mode and AI Overviews that will know when to put questions through Gemini 3 Pro. In the meantime, subscribers can try the new model inside of AI Mode by selecting "Thinking" from the dropdown menu. In practice, Google says Gemini 3 Pro will result in AI Mode finding more credible and relevant content related to your questions. This is thanks to how the new model augments the fan-out technique that powers AI Mode. The tool will perform even more searches than before and with its new intelligence, Google suggests it may even uncover content previous models may have missed. At the same time, Gemini 3's better multi-modal understanding will translate to AI Mode generating more dynamic and interactive interfaces to answer your questions. For example, if you're researching mortgage loans, the tool can create a loan calculator directly inside of its response. For developers and its enterprise customers, Google is bringing Gemini 3 to all the usual places one can find its models, including inside of the Gemini API, AI Studio and Vertex AI. The company is also releasing a new agentic coding app called Antigravity. It can autonomously program while creating tasks for itself and providing progress reports. Alongside Gemini 3 Pro, Google is introducing Gemini 3 Deep Think. The enhanced reasoning mode will be available to safety testers before it rolls out to AI Ultra subscribers.
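The loan-calculator interface described above reduces to standard amortization arithmetic. As a rough illustration of what such a generated calculator computes, here is a minimal sketch of the fixed-rate payment formula; all figures are illustrative and not tied to any Google product:

```python
# Standard fixed-rate amortization formula: M = P * r * (1+r)^n / ((1+r)^n - 1),
# where P is the principal, r the monthly rate, and n the number of payments.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Monthly payment on a fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    if r == 0:
        return principal / n      # zero-interest edge case
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# Example: $300,000 over 30 years at 6% APR -> about $1,798.65 per month.
payment = monthly_payment(300_000, 0.06, 30)
```

An interactive calculator of the kind described would simply re-run this arithmetic as the user drags the principal, rate, or term sliders.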
[13]
Gemini 3 is here: Google's most advanced model promises better reasoning, coding, and more
Gemini 3 is Google's most intelligent AI model to date, promising improvements in reasoning, coding, and multimodal capabilities. Users will be able to ask even more difficult and puzzling questions to Gemini (including logic puzzles and math problems), and find that it aces them with ease. The AI model will also be able to execute even more complex coding tasks with this update. Furthermore, it also promises to excel in multimodal analysis, enabling users to combine media, text, and other formats within the same prompt for a seamless, all-around experience. All of this comes with a better grasp of the context and intent behind your prompt.
[14]
Google Launches Gemini 3 Pro to Usher in a 'New Era of Intelligence'
Google announced on Tuesday the release of Gemini 3, the latest version of its flagship large language model, which it calls its smartest model yet, with a whole lot of benchmarks to back it up. The company said that Gemini 3 Pro will be available to users in preview across several Google products starting today, including being deployed in Search, and the "enhanced reasoning model" Gemini 3 Deep Think will start rolling out to Google AI Ultra subscribers after it completes safety testing. If you follow any AI-obsessed types on social media, you know Gemini 3 has been wildly anticipated by that bunch, who kept whispering about how the model would be a game-changer. Time will tell if it actually achieves that, but Google has a lot of metrics to show how the model is better on paper than previous iterations. The company bragged that it now tops the leaderboard of LMArena, a benchmarking tool used to compare LLMs, dethroning Grok 4.1 Thinking. Google also claims that Gemini 3 Pro demonstrates "PhD-level reasoning" on Humanity’s Last Exam and GPQA Diamond, and set a new record for mathematics performance on MathArena Apex. The scores, of course, climb even higher with Gemini 3 Deep Think. Does it matter that AI benchmarks are considered unreliable and misleading? Not if you're on the hype train, it doesn't. So Gemini 3 is smarter, which tends to be the primary upgrade that new LLM models bring. But Google is introducing some new capabilities with this model, too. The company announced Google Antigravity, an integrated development environment (IDE) for coding that has a built-in AI agent for maximizing your vibe coding. Basically, it's a way to more easily hand off coding tasks to AI, which can work across the editor, terminal, and browser based on the user's commands. While Gemini 3 Pro will be the default, the company said Antigravity also supports Claude Sonnet 4.5 and GPT-OSS agents. 
Antigravity will be available for testing starting today on Windows, Mac, and Linux. In addition to the newly launched Antigravity, developers will be able to access Gemini 3 Pro in Google's Vertex AI and AI Studio. The company also claims Gemini 3's coding benchmarks are world beaters, too, topping the WebDev Arena leaderboard and setting a new high on Terminal-Bench 2.0. Again, your mileage may vary as to whether those mean anything to you or if they're just numbers on a test. The big thing about Gemini 3 is that you're just not going to be able to avoid it. For the first time, the company is immediately integrating the model into Search. So any time you get an answer from AI Mode, it'll have run through Gemini 3. The Gemini app is where the model will primarily live, of course, and users will find what Google calls a "generative interface," which offers two output modes called visual layout and dynamic view. The visual layout is a more traditional experience, pulling in images to the output based on the user's prompt. The dynamic view sees Gemini quickly whip up a website-style interface with functioning buttons and lets you interact with different pages of information. Google's Gemini 3 is the latest model to join the game of one-upmanship from the biggest players in the AI space. Earlier this month, OpenAI dropped its GPT-5.1 model, and on Monday, Elon Musk's xAI released Grok 4.1. It took all of 24 hours before Google grabbed the spotlight. We'll see how long the company can hold onto it before it's usurped by another model.
[15]
Google's latest AI model invades Search on day one
For the past week now, the tech (and gambling) sphere has been buzzing with anticipation about Google's latest Gemini model. The speculation surrounding Gemini 3, however, finally ends now. The tech giant, in a new Keyword blog post, just made its most intelligent large language model official, paired with 'Generative Interfaces,' and an all-new 'Gemini Agent.'
Gemini 3 is official
There's a lot that's new with Gemini 3, but more importantly, this marks the first time Google has brought its new flagship AI model straight to Google Search (starting with AI Mode) on day one. According to the tech giant, the new model will set a new bar for AI model performance. "You'll notice the responses are more helpful, better formatted and more concise," wrote Google, adding that Gemini 3 is the best model in the world for multimodal tasks. This means that for tasks like analyzing photos, reasoning, document analysis, transcribing lecture notes, and more, you'll notice better performance from Gemini 3 than its predecessors (and potentially even competitors). On paper, Gemini 3 Pro boasts a score of 1501 on LMArena, ranking higher than Gemini 2.5 Pro's 1451 score. Google AI Pro and Ultra subscribers in the US can start experimenting with Gemini 3 Pro starting today. To do so, head to Google Search > AI Mode > select 'Thinking' from the model drop-down. The model will expand to everyone in the US "soon," with AI Pro and Ultra plan holders retaining higher usage limits.
Generative interfaces end Gemini's static UI
Think of generative interfaces as dynamic prompt-based UIs that change depending on your specific requests. The new feature is powered by two experiments, namely visual layout and dynamic view. The former kicks in when manually selected. Instead of answering your queries in a plain text-based format, visual layout triggers an immersive, magazine-style view, complete with photos and modules.
For reference, prompts like "plan a 3-day trip to Rome next summer" will produce a visual itinerary. Dynamic view, on the other hand, changes the entire Gemini user interface. Leveraging Gemini 3's agentic coding capabilities, the feature essentially designs and codes a custom UI in real-time, suited to your prompt. For example, it can build a custom gallery view for a prompt like "explain the Van Gogh Gallery with life context for each piece." Dynamic view and visual layout are rolling out now.
Gemini Agent arrives
Likely the most ambitious of the bunch, Gemini Agent, as Google describes it, is "an experimental feature that handles multi-step tasks directly inside Gemini." The agent, which can connect to your Google apps like Calendar, reminders, and more, can do a lot. For example, you can simply ask it to "organize my inbox," and it will go through your to-dos and even draft replies to emails for your approval. Alternatively, you can give the agent complex multi-command tasks to fulfill. Think something like "Research and help me book a mid-size SUV for my trip next week under $80/day using details from my email." The agent would locate flight information from Gmail, compare rentals within budget, and prepare the booking for you. Powered by Gemini 3, the agent, which needs to be manually selected from the Gemini app's 'tools,' can take action across other Gemini tools like Deep Research, Canvas, connected Workspace apps, live browsing, and more. Gemini Agent is available to try out starting today, but only for US-based Google AI Ultra subscribers on the web. The tech giant did not hint at the feature expanding to more users anytime soon.
[16]
Google launches Gemini 3 with major boost in AI reasoning power
Google just fired the next shot in the AI arms race, and it's a big one. The company has unveiled Gemini 3, calling it its most powerful reasoning and multimodal model yet, and positioning it as the centerpiece of a new era of agentic, deeply interactive AI. The new release marks a leap in performance across nearly every major benchmark, from mathematics to multimodal analysis. Google says Gemini 3 Pro, rolling out today in preview, is now baked across a suite of its consumer and developer products.
[17]
Gemini app rolling out Gemini 3 Pro as 'Gemini Agent' comes to AI Ultra
Google today is introducing new features for the Gemini app, led by Gemini Agent, made possible by Gemini 3 Pro. For starters, responses in the Gemini app are "more helpful, better formatted, and more concise." In Canvas, apps are "more full-featured," with Google calling Gemini 3 its "best vibe coding model ever." Gemini 3 Pro is available for all users starting today. From the model picker, select "Thinking" in a new change also shared by AI Mode. Google AI Plus, Pro, and Ultra subscribers will get higher limits. The "Labs" concept is coming to the Gemini app. Visual layout is the first experiment that creates an "immersive, magazine-style view complete with photos and modules" to answer your prompt. Gemini will show sliders, checkboxes, and other UI to further customize the result. For example, when trip planning, you might get a slider to set the pace, while filters let you select the activity type. Dynamic view sees Gemini 3 design and code a "fully customized interactive response for each prompt." At launch, you might only see one of these experiments as Google gathers feedback. Gemini Agent brings what Google has learned with Project Mariner to the Gemini app. This experimental feature "handles multi-step tasks directly inside Gemini." This is made possible by Gemini 3's advanced reasoning, live web browsing, and tool use, including Canvas, Deep Research, Gmail, and Google Calendar. Gemini will have you confirm before sending emails, making purchases, and other critical actions, with users able to take over at any time. One example prompt is "organize my inbox," with the user selecting "Agent" from the Tools menu. Gemini Agent then groups together related emails with a table that lets you quickly archive and mark as read emails by tapping the checkmark. It can also create new Google Tasks reminders. Meanwhile, Google can now generate an email reply and send it directly with the Gemini app.
The inline email interface lets you add recipients and change the body as needed. Another example is: "Book a mid-size SUV for my trip next week under $80/day using details from my email." Gemini will "locate your flight information, research rentals within budget and prepare the booking." This is available today for Google AI Ultra subscribers ($249.99 per month) in the US.
[18]
Gemini 3 is here -- Google's most powerful AI model yet is crushing benchmarks, improving search and outperforming ChatGPT
Google just dropped Gemini 3, and it's already shattering benchmarks. Gemini 3 Pro now sits at the top of the LMArena leaderboard with what Google calls a "breakthrough score," trouncing Gemini 2.5 Pro across every major test -- math, long-form reasoning, multimedia understanding, you name it. Oh, and yes, it translates, too. This is the smartest model Google has built, period. The good news is Google isn't wasting any time rolling it out, either. Gemini 3 is already live in Google Search, the Gemini app and a full suite of developer tools -- marking one of the company's most aggressive moves yet in the AI arms race. Starting today, Google AI Pro and Ultra subscribers in the U.S. can start using Gemini 3 Pro directly inside Search by selecting "Thinking" from the model selector. Google's reworked how Search finds information behind the scenes; a complete game-changer for the way we Google things. Gemini 3 digs deeper and genuinely understands what you're actually asking for in terms of context, not just keywords. Gemini 3's deeper reasoning allows Search to perform more background queries, uncover sources older models would have missed and better understand the intent behind your question, beyond the keywords. The result is cleaner, more accurate answers with fewer irrelevant results or hallucinations. And there's more on the way. Google says automatic model routing is coming soon, sending simple questions to lighter models and reserving Gemini 3 for the hard stuff. It is efficiency meets intelligence -- and a clear sign of where Search is headed next. This is where things get really interesting. Instead of just spitting out text answers, Gemini 3 can now generate interactive tools, visualizations and even simulations right inside your search results. Curious about the physics of the three-body problem? Gemini 3 will build you a live simulation you can actually play with. Shopping for a mortgage? 
It'll generate an interactive loan calculator where you can plug in different numbers and see the results instantly. The layout adapts to your question so no two searches look quite the same. Google is calling Gemini 3 its "most intelligent model" yet, and the specs back that up. Soon, Google will also roll out automatic model selection. Simple search questions will get answered faster with lighter models, while the tough stuff gets routed to Gemini 3. And, Google says this is just the beginning, and users can expect more dynamic visual tools and creative layouts to roll out in the upcoming months. Gemini 3 is Google's biggest swing at AI yet. It's smarter, more interactive and can actually handle even bigger, more complex tasks. But the real shift is how tightly Google's weaving it into Search itself. With visual tools that build themselves, simulations that run in real time and deeper reasoning happening behind the scenes, Search is completely shifting from a search engine towards an AI assistant that actually knows what you're trying to do. Gemini 3 is already available within the Gemini app and AI Mode in Search for Google AI Pro and Ultra users in the US only for now. Deep Think mode launches in the coming weeks for Ultra subscribers.
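Queries like the three-body example above are answered with a live physics simulation. As a rough illustration of what such a simulation computes, here is a minimal planar three-body integrator in Python (normalized units, leapfrog scheme); this is purely illustrative and unrelated to any code Gemini actually generates:

```python
import numpy as np

G = 1.0  # gravitational constant in normalized units (illustrative)

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations for n bodies in the plane."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def simulate(pos, vel, mass, dt=1e-3, steps=2000):
    """Kick-drift-kick leapfrog integration; returns the trajectory."""
    pos, vel = pos.astype(float), vel.astype(float)
    traj = [pos.copy()]
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
        traj.append(pos.copy())
    return np.array(traj)

# The well-known "figure eight" initial conditions: three equal masses
# chasing each other along a figure-eight orbit.
r1 = np.array([0.97000436, -0.24308753])
v3 = np.array([-0.93240737, -0.86473146])
pos0 = np.array([r1, -r1, [0.0, 0.0]])
vel0 = np.array([-v3 / 2, -v3 / 2, v3])
traj = simulate(pos0, vel0, np.ones(3))
```

An interactive front end would simply animate `traj` and let the user perturb the initial positions and velocities to watch the chaotic divergence the article alludes to.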
[19]
Google Launches More Intelligent Gemini 3 Model
Google today introduced Gemini 3 Pro, its newest and most intelligent AI model. Google says that Gemini offers state-of-the-art reasoning, able to understand depth and nuance. It is also better at understanding the context and intent behind a request for more relevant answers. According to Google, Gemini 3 Pro is the best model in the world for multimodal understanding, outperforming Gemini 2.5 Pro on every major AI benchmark. Responses have been designed to be concise and direct, with less flattery. Google claims that it serves as a "true thought partner." Gemini 3 Pro is rolling out across Google platforms. It's been incorporated into AI mode in Search for Pro and Ultra subscribers, the Gemini app (select Thinking from the model selector), AI Studio, Vertex AI, and Google Antigravity, a new agentic development platform. AI Mode in Search will use Gemini 3 to provide new generative UI experiences like immersive visual layouts and interactive tools generated on the fly. Google AI Ultra subscribers can also use Gemini 3 with Gemini Agent as of today, with Gemini 3 able to execute multi-step workflows from start to finish. Gemini 3 Deep Think is even more intelligent, and Google says that it can solve more complex problems than Gemini 3 Pro. Gemini 3 Deep Think Mode will be available to Google AI Ultra subscribers in the coming weeks. As part of the Gemini 3 launch, Google redesigned the Gemini app to give it a more modern look. Google says that it's easier to start chats and find images, videos, and reports that you've created in a dedicated My Stuff folder. The shopping experience has been overhauled, incorporating product listings, comparison tables, and prices from Google's Shopping Graph. There are new interfaces, including a visual layout that uses photos and modules, and a dynamic view that uses agentic coding capabilities to create a custom user interface in real-time suited to a query.
[20]
I used Google's new Gemini 3 AI to make Android apps and fine-tune my workouts
Google's new Gemini 3 AI model brings big improvements to reasoning and coding. We put it to the test. As we approach the end of the year, the companies at the forefront of AI are rushing to get their latest and greatest models into the hands of consumers, developers, and businesses. Last week, OpenAI unveiled ChatGPT 5.1, a smarter model with more personality. Not to be outdone by its rival, Google has unveiled Gemini 3, the latest version of its AI model that the company says has improved reasoning, coding, and multimodal capabilities.
[21]
Gemini 3 is here - 3 things to know about the major AI update
Search will use Gemini 3 to make interactive and visual responses to your more complex questions Google has officially debuted Gemini 3, marking a major upgrade for its AI models and the platforms employing them. New reasoning modes, fresh app features, and an overhaul of the Google ecosystem that could change how people engage with Google products in day-to-day life. The new model arrives with its usual stack of benchmarks and leaderboards, but the more interesting part is what it means for those skeptical about whether they should experiment with AI. Gemini 3 comes in two flavors: Gemini 3 Pro and Gemini 3 Deep Think. Pro is the everyday, full-featured version available immediately in apps, Search, and developer tools. Deep Think is the "enhanced reasoning" mode with an extra gear, currently in testing and destined for the deep-pocketed Google AI Ultra subscribers. But both share the same core. Google is positioning Gemini 3 as a leap forward in reasoning, not just raw size or speed. The company likes to talk about AGI "steps" in cautious forecasts, but the tone around this release is plainly more confident. The company is keen to boast how well Gemini 3 can perform at everything from parsing and translating a handwritten recipe to being able to merge it with a voice note and write a cookbook from the combination. That kind of multimodality is where Gemini 3 feels most transformed. The model's video analysis now understands movement, timing, and other details. It can even analyze a sports game and suggest a training plan for players. And its million-token context window means it can keep track of sprawling, real-world information without falling apart halfway through a long session. Gemini 3 doesn't arrive in isolation. The Gemini app is getting one of its largest overhauls, with a new interface navigation system and a "My Stuff" folder for every piece of AI-generated content you've prompted. The new app should feel more helpful by default, according to Google. 
The most noticeable change is the new generative interfaces. Gemini 3 builds them in real time based on what you're asking for instead of just using a template. As an example, Google suggested asking for help planning a vacation might produce a magazine-style itinerary complete with visuals. Or instead of a wall of text to answer a complicated question, you'd see a visually-heavy layout of diagrams, tables, and other illustrations. The Gemini app is also getting an agent to act on your behalf. If you give Gemini a task that requires a few dozen steps, it can carry them out using your connected Google apps. The agent is starting with Google AI Ultra members and expanding from there. Gemini 3 is changing how Google handles complex questions. For the first time, a Gemini model is available in Search immediately, and Google is routing the toughest queries to it. U.S. Google AI Pro and Ultra subscribers will see Gemini 3 Pro in AI Mode, with broader access coming soon. Gemini 3 improves Google's fan-out approach of searching multiple interpretations of your question to find relevant content. Gemini 3 understands intent deeply enough to discover material that earlier versions routinely missed. The most striking upgrade is the new generative UI. When you ask a complex question, Gemini 3 constructs a layout with visuals, tables, grids, and even custom-coded interactive simulations to go with the answer. A question about the three-body problem produces a manipulable model. A question about loans generates a calculator tailored to the details you've included. Answers become more like small apps. There are plenty of links to the source material as well, theoretically encouraging follow-ups by users. Google says this system will evolve, particularly as automatic model selection routes more queries to Gemini 3 behind the scenes.
[22]
Gemini 3 Pro powers up Google's AI ambitions
Why it matters: The rollout builds on Google's recent momentum, including reports that a future custom model could power Apple's Siri.
* Google says Gemini 3 Pro sets new high scores on a variety of existing benchmarks, is capable of more nuanced and complex answers and generates interactive graphics in response to prompts.
Driving the news: Gemini 3 Pro will roll out starting today in the Gemini app.
* Gemini 3 Pro will also start powering AI Mode in search, the first time that a new model has been integrated with search at launch.
* Developers will have access to Gemini 3 Pro via Google's Vertex AI and AI Studio.
* Google is also introducing a new agentic tool called Antigravity that lets developers describe apps more conceptually, instead of coding them line by line.
* A more advanced Gemini 3 Deep Think will remain in testing before launching first for AI Ultra subscribers.
The big picture: Gemini 3 arrives amid fierce competition among the leading AI players. OpenAI released GPT-5 in August and updated it last week to 5.1, with additional personality options.
* Elon Musk's xAI released Grok 4.1 on Monday, saying the update is far less likely to hallucinate than previous versions.
Between the lines: The Gemini 3 rollout came later than some expected. Google said it wanted to give itself and its partners more time to test and integrate the model.
* "One of the things we really wanted to optimize for with this model is putting it out broadly," Google senior director Tulsee Doshi told Axios.
What we're watching: Doshi said to expect future versions of Gemini 3 optimized for running locally or delivering results that are faster and more cost-efficient.
[23]
Google unveils Gemini 3 claiming the lead in math, science, multimodal, agentic AI benchmarks
After more than a month of rumors and feverish speculation -- including Polymarket wagering on the release date -- Google today unveiled Gemini 3, its newest proprietary frontier model family and the company's most comprehensive AI release since the Gemini line debuted in 2023. The models are proprietary (closed-source), available exclusively through Google products, developer platforms, and paid APIs, including Google AI Studio, Vertex AI, the Gemini CLI, and third-party integrations across the broader IDE ecosystem. Gemini 3 arrives as a full portfolio, including:
* Gemini 3 Pro: the flagship frontier model
* Gemini 3 Deep Think: an enhanced reasoning mode
* Generative interface models powering Visual Layout and Dynamic View
* Gemini Agent for multi-step task execution
* Gemini 3 engine embedded in Google Antigravity, the company's new agent-first development environment.
The launch represents one of Google's largest, most tightly coordinated model releases. Gemini 3 is shipping simultaneously across Google Search, the Gemini app, Google AI Studio, Vertex AI, and a range of developer tools. Executives emphasized that this integration reflects Google's control of TPU hardware, data center infrastructure, and consumer products. According to the company, the Gemini app now has more than 650 million monthly active users, more than 13 million developers build with Google's AI tools, and more than 2 billion monthly users engage with Gemini-powered AI Overviews in Search. At the center of the release is a shift toward agentic AI -- systems that plan, act, navigate interfaces, and coordinate tools, rather than just generating text. Gemini 3 is designed to translate high-level instructions into multi-step workflows across devices and applications, with the ability to generate functional interfaces, run tools, and manage complex tasks.
Major Performance Gains Over Gemini 2.5 Pro
Gemini 3 Pro introduces large gains over Gemini 2.5 Pro across reasoning, mathematics, multimodality, tool use, coding, and long-horizon planning. Google's benchmark disclosures show substantial improvements in many categories. In mathematical and scientific reasoning, Gemini 3 Pro scored 95 percent on AIME 2025 without tools and 100 percent with code execution, compared to 88 percent for its predecessor. On GPQA Diamond, it reached 91.9 percent, up from 86.4 percent. The model also recorded a major jump on MathArena Apex, reaching 23.4 percent versus 0.5 percent for Gemini 2.5 Pro, and delivered 31.1 percent on ARC-AGI-2 compared to 4.9 percent previously. Multimodal performance increased across the board. Gemini 3 Pro scored 81 percent on MMMU-Pro, up from 68 percent, and 87.6 percent on Video-MMMU, compared to 83.6 percent. Its result on ScreenSpot-Pro, a key benchmark for agentic computer use, rose from 11.4 percent to 72.7 percent. Document understanding and chart reasoning also improved. Coding and tool-use performance showed equally significant gains. The model's LiveCodeBench Pro score reached 2,439, up from 1,775. On Terminal-Bench 2.0 it achieved 54.2 percent versus 32.6 percent previously. SWE-Bench Verified, which measures agentic coding through structured fixes, increased from 59.6 percent to 76.2 percent. The model also posted 85.4 percent on τ²-bench, up from 54.9 percent. Long-context and planning benchmarks indicate more stable multi-step behavior. Gemini 3 achieved 77 percent on MRCR v2 at 128k context (versus 58 percent) and 26.3 percent at 1 million tokens (versus 16.4 percent). Its Vending-Bench 2 score reached $5,478.16, compared to $573.64 for Gemini 2.5 Pro, reflecting stronger consistency during long-running decision processes.
Language understanding scores improved on SimpleQA Verified (72.1 percent versus 54.5 percent), MMLU (91.8 percent versus 89.5 percent), and the FACTS Benchmark Suite (70.5 percent versus 63.4 percent), supporting more reliable fact-based work in regulated sectors.
Generative Interfaces Move Gemini Beyond Text
Gemini 3 introduces a new class of generative interface capabilities. Visual Layout produces structured, magazine-style pages with images, diagrams, and modules tailored to the query. Dynamic View generates functional interface components such as calculators, simulations, galleries, and interactive graphs. These experiences now appear in Google Search's AI Mode, enabling models to surface information in visual, interactive formats beyond static text. Google says the model analyzes user intent to construct the layout best suited to a task. In practice, this includes everything from automatically building diagrams for scientific concepts to generating custom UI components that respond to user input.
Gemini Agent Introduces Multi-Step Workflow Automation
Gemini Agent marks Google's effort to move beyond conversational assistance toward operational AI. The system coordinates multi-step tasks across tools like Gmail, Calendar, Canvas, and live browsing. It reviews inboxes, drafts replies, prepares plans, triages information, and reasons through complex workflows, while requiring user approval before performing sensitive actions. On the press call, Google said the agent is designed to handle multi-turn planning and tool-use sequences with consistency that was not feasible in earlier generations. It is rolling out first to Google AI Ultra subscribers in the Gemini app.
Google Antigravity and Developer Toolchain Integration
Antigravity is Google's new agent-first development environment designed around Gemini 3. Developers collaborate with agents across an editor, terminal, and browser.
The system orchestrates full-stack tasks, including code generation, UI prototyping, debugging, live execution, and report generation. Across the broader developer ecosystem, Google AI Studio now includes a Build mode that automatically wires the right models and APIs to speed up AI-native app creation. Annotations support allows developers to attach prompts to UI elements for faster iteration. Spatial reasoning improvements enable agents to interpret mouse movements, screen annotations, and multi-window layouts to operate computer interfaces more effectively. Developers also gain new reasoning controls through "thinking level" and "model resolution" parameters in the Gemini API, along with stricter validation of thought signatures for multi-turn consistency. A hosted server-side bash tool supports secure, multi-language code generation and prototyping. Grounding with Google Search and URL context can now be combined to extract structured information for downstream tasks.
Enterprise Impact and Adoption
Enterprise teams gain multimodal understanding, agentic coding, and long-horizon planning needed for production use cases. The new model unifies analysis of documents, audio, video, workflows, and logs. Improvements in spatial and visual reasoning support robotics, autonomous systems, and scenarios requiring navigation of screens and applications. High-frame-rate video understanding helps developers detect events in fast-moving environments. Gemini 3's structured document understanding capabilities support legal review, complex form processing, and regulated workflows. Its ability to generate functional interfaces and prototypes with minimal prompting reduces engineering cycles. In addition, the gains in system reliability, tool-calling stability, and context retention make multi-step planning viable for operations like financial forecasting, customer support automation, supply chain modeling, and predictive maintenance.
Developer and API Pricing

Google has disclosed initial API pricing for Gemini 3 Pro. In preview, the model is priced at $2 per million input tokens and $12 per million output tokens for prompts up to 200,000 tokens in Google AI Studio and Vertex AI. Gemini 3 Pro is also available at no charge with rate limits in Google AI Studio for experimentation. The company has not yet announced pricing for Gemini 3 Deep Think, extended context windows, generative interfaces, or tool invocation. Enterprises planning deployment at scale will require these details to estimate operational costs.

Multimodal, Visual, and Spatial Reasoning Enhancements

Gemini 3's improvements in embodied and spatial reasoning support pointing and trajectory prediction, task progression, and complex screen parsing. These capabilities extend to desktop and mobile environments, enabling agents to interpret screen elements, respond to on-screen context, and unlock new forms of computer-use automation. The model also delivers improved video reasoning with high-frame-rate understanding for analyzing fast-moving scenes, along with long-context video recall for synthesizing narratives across hours of footage. Google's examples show the model generating full interactive demo apps directly from prompts, illustrating the depth of multimodal and agentic integration.

Vibe Coding and Agentic Code Generation

Gemini 3 advances Google's concept of "vibe coding," where natural language acts as the primary syntax. The model can translate high-level ideas into full applications with a single prompt, handling multi-step planning, code generation, and visual design. Enterprise partners like Figma, JetBrains, Cursor, Replit, and Cline report stronger instruction following, more stable agentic operation, and better long-context code manipulation compared to prior models.

Rumors and Rumblings

In the weeks leading up to the announcement, X became a hub of speculation about Gemini 3.
Well-known accounts such as @slow_developer suggested internal builds were significantly ahead of Gemini 2.5 Pro and likely exceeded competitor performance in reasoning and tool use. Others, including @synthwavedd and @VraserX, noted mixed behavior in early checkpoints but acknowledged Google's advantage in TPU hardware and training data. Viral clips from users like @lepadphone and @StijnSmits showed the model generating websites, animations, and UI layouts from single prompts, adding to the momentum.

Prediction markets on Polymarket amplified the speculation. Whale accounts drove the odds of a mid-November release sharply upward, prompting widespread debate about insider activity. A temporary dip during a global Cloudflare outage became a moment of humor and conspiracy before odds surged again. The key moment came when users including @cheatyyyy shared what appeared to be an internal model-card benchmark table for Gemini 3 Pro. The image circulated rapidly, with commentary from figures like @deedydas and @kimmonismus arguing the numbers suggested a significant lead. When Google published the official benchmarks, they matched the leaked table exactly, confirming the document's authenticity.

By launch day, enthusiasm reached a peak. A brief "Geminiii" post from Sundar Pichai triggered widespread attention, and early testers quickly shared real examples of Gemini 3 generating interfaces, full apps, and complex visual designs. While some concerns about pricing and efficiency appeared, the dominant sentiment framed the launch as a turning point for Google and a display of its full-stack AI capabilities.

Safety and Evaluation

Google says Gemini 3 is its most secure model yet, with reduced sycophancy, stronger prompt-injection resistance, and better protection against misuse. The company partnered with external groups, including Apollo and Vaultis, and conducted evaluations using its Frontier Safety Framework.
Deployment Across Google Products

Gemini 3 is available across Google Search AI Mode, the Gemini app, Google AI Studio, Vertex AI, the Gemini CLI, and Google's new agentic development platform, Antigravity. Google says additional Gemini 3 variants will arrive later.

Conclusion

Gemini 3 represents Google's largest step forward in reasoning, multimodality, enterprise reliability, and agentic capabilities. The model's performance gains over Gemini 2.5 Pro are substantial across mathematical reasoning, vision, coding, and planning. Generative interfaces, Gemini Agent, and Antigravity demonstrate a shift toward systems that not only respond to prompts but plan tasks, construct interfaces, and coordinate tools. Combined with an unusually intense hype and leak cycle, the launch marks a significant moment in the AI landscape as Google moves aggressively to expand its presence across both consumer-facing and enterprise-facing AI workflows.
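The preview pricing disclosed earlier ($2 per million input tokens, $12 per million output tokens, for prompts up to 200,000 tokens) reduces to simple arithmetic. A minimal cost estimator, valid only for the disclosed tier, might look like this; pricing for longer prompts, Deep Think, and tool invocation is unannounced, so those cases are deliberately rejected:

```python
def gemini3_pro_preview_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate preview cost in USD for Gemini 3 Pro using the disclosed
    rates: $2 per million input tokens, $12 per million output tokens.
    Only valid for prompts up to 200,000 tokens; pricing beyond that
    (and for Deep Think, generative interfaces, or tools) is unannounced."""
    if input_tokens > 200_000:
        raise ValueError("Pricing for prompts over 200k tokens is not yet disclosed")
    return (input_tokens / 1_000_000) * 2.00 + (output_tokens / 1_000_000) * 12.00

# Example: a 150k-token codebase prompt with a 20k-token response
print(round(gemini3_pro_preview_cost(150_000, 20_000), 2))  # 0.54
```

At these rates, output tokens dominate cost for generation-heavy workloads (6x the input rate), which is worth keeping in mind when estimating budgets for agentic loops that emit long plans and code.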
[24]
Google releases its heavily hyped Gemini 3 AI in a sweeping rollout | Fortune
Google released its Gemini 3 AI model today after weeks of social-media hype, vague posting, and wink emojis. An early-morning leak of the Pro version's model card -- which outlines key details about the system and its benchmark performance -- had developers posting on X as though Santa had arrived early. Even former OpenAI researcher and co-founder Andrej Karpathy joked about the buildup: "I heard Gemini 3 answers questions before you ask them. And that it can talk to your cat," he wrote on X.

It remains to be seen, of course, whether the model lives up to the hype that it would, as one X user put it, "absolutely crush" all other state-of-the-art models, including OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5/Opus 4 and xAI's Grok 4. But what is clear is that Google's confident, widespread release of Gemini 3, in Pro and "Deep Think" versions, is a long way from its tentative debut of the first Gemini model in February 2024 -- when the company faced intense backlash over "woke" outputs and ahistorical or inaccurate images and text, ultimately admitting it had "missed the mark." Its Gemini-powered AI Overviews in Search also triggered an online furor after the system famously told users to eat glue and rocks.

This time around, Gemini 3 is getting a sweeping day-one rollout across a large swath of Google's ecosystem with its billions of users -- including its fastest-ever deployment into Google Search. "This is the very first time we're shipping our latest Gemini model in search," said Robby Stein, vice president of product for Google Search, in a press preview. That includes Google's AI Pro and Ultra subscribers getting access to Gemini 3 in Search's AI Mode, with new visual layouts featuring interactive tools and elements like images, tables and grids. Google also benefits from the fact that, unlike during past AI rollouts, OpenAI didn't manage to steal its thunder this time.
OpenAI already debuted its massively hyped GPT-5 model in August -- a release many observers found underwhelming. Last week, the company released a 5.1 update it described as "smarter" and "more conversational," with eight different "personalities" to choose from, but that still left the door wide open for Google to make Gemini waves.

In a blog post introducing Gemini 3, Alphabet and Google CEO Sundar Pichai boasted that Google's AI Overviews now have 2 billion users per month, while the Gemini app has more than 650 million monthly active users, and more than 13 million developers are building with Gemini, Google's "most intelligent model." Today, he wrote, "we're shipping Gemini at the scale of Google."

Google also crowed about the model's results on major AI industry benchmarks, saying that it beat the earlier Gemini 2.5 Pro on every major test of reasoning. It said the model performs extremely well on academic-style challenges testing logic, math, science, and problem-solving, reaching scores that Google claimed resemble "PhD-level" reasoning. It also said the model improved on factual accuracy. Google also claimed the model is more thoughtful and useful in conversation: instead of giving generic flattery or buzzword-filled answers -- much-disliked features of many chatbot responses -- it's supposed to offer clearer, more direct insight.

In addition, Google said Gemini 3 has "undergone the most comprehensive set of safety evaluations of any Google AI model to date," adding that the model shows "reduced sycophancy, increased resistance to prompt injections and improved protection against misuse via cyberattacks." Over the past year, AI security experts have shared many examples of Gemini's vulnerability to prompt injections, in which attackers manipulate the model by embedding malicious instructions into its input, and other types of threats.
Amid rising publisher fears that Google's AI Overviews are causing a "traffic apocalypse" that kills click-throughs to news sites, Google continued to insist that it will keep connecting users to publisher content. That reassurance comes despite research showing that users are less likely to click on result links when an AI summary appears -- and that when AI summaries do surface sources, users rarely click through to them.

"We continue to send billions of clicks to the web every day, and we're prominently highlighting the web in our Search AI experiences in a way that encourages onward exploration," a Google spokesperson told Fortune by email. "As always, we continue to prominently display links to the web throughout the AI Mode response, so people can continue learning and exploring."

Google also pointed to its "query fan-out technique" -- essentially, taking a user's single question and breaking it into many smaller, behind-the-scenes searches to gather more relevant information. "Now, not only can it perform even more searches to uncover relevant web content, but because Gemini more intelligently understands your intent, it can find new content that it may have previously missed," the spokesperson said. "This means Search can help you find even more highly relevant web content for your specific question."

No matter how Gemini 3 is received, there's little question that Google is far ahead of where it stood less than three years ago, when ChatGPT's arrival sparked an internal "code red." The company is also playing to its strengths, looking directly at what its billions of consumers want -- including unveiling first-of-its-kind generative shopping interfaces in the Gemini app with product listings, comparison tables and live pricing pulled from Google's 50-billion-item Shopping Graph. That, of course, is Google's not-so-secret sauce: the massive amounts of data that flow through its products every day.
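The query fan-out technique described above can be illustrated with a toy sketch: one user question is expanded into several narrower sub-queries, each searched independently, and the results merged and de-duplicated before an answer is composed. The sub-query templates and the search stub here are placeholders for illustration, not Google's actual implementation.

```python
# Toy illustration of query fan-out. Everything here is a placeholder:
# a real system would decompose the query with a model and hit a search
# index, not use fixed string templates.
def fan_out(query: str) -> list[str]:
    templates = ["{q}", "{q} reviews", "{q} comparison", "{q} pricing"]
    return [t.format(q=query) for t in templates]

def search(subquery: str) -> list[str]:
    # Placeholder retrieval stub returning one "document" per sub-query.
    return [f"result for '{subquery}'"]

def gather(query: str) -> list[str]:
    """Run every sub-query, then merge results while dropping duplicates."""
    seen, merged = set(), []
    for sq in fan_out(query):
        for doc in search(sq):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

print(len(gather("mirrorless camera under $1000")))  # 4
```

The design point is that breadth comes from the decomposition step: a smarter model produces more nuanced sub-queries, which is exactly the improvement Google attributes to Gemini 3 here.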
And Gemini 3 is yet another reminder that few companies, if any, have the data foundation or the global reach to ship AI at this scale. Still, even Pichai, the company's CEO, urges caution when it comes to AI. In a new interview with the BBC, he said people should not "blindly trust" everything AI tools tell them, adding they are "prone to errors" and urging people to use them alongside other tools. Pichai also warned that no company would be immune if the AI bubble burst. Presumably even Google.
[25]
Gemini 3 is live and ready to show the next leap in AI
What's happened? Google has announced the rollout of Gemini 3, stating it is its most capable and intelligent AI model yet. The model handles text, images, and audio simultaneously, meaning you could show a photo, ask about it, and hear or read a detailed answer, all in one go. It's also available immediately in the Gemini app for Pro users and is being integrated into Google Search.

* Gemini 3 Pro is described by Google as "natively multimodal," supporting tasks like turning recipe photos into full cookbooks or generating interactive study tools from video lectures.
* Google says the model has improved reasoning capabilities, better task-planning, and reduced "sycophancy" (i.e., less flattery and more direct answers) compared to past versions.
* The launch comes with new tools like Google's Antigravity coding platform, which uses Gemini 3 Pro to automate workflows and document every step via artifacts.

Why this is important: This launch signals a major shift in how we might interact with AI. With Gemini 3's multimodal ability, you're no longer limited to typing questions. Instead, you can show images, talk to it, and play audio, all in the same session. That opens doors for smarter assistants, better content generation, and workflows that really fit how we think and work. For developers, businesses, and Google itself, this model sets the stage for a new wave of AI-powered tools. If Gemini 3 works well in real-world use, it could redefine expectations around virtual assistants, creative tools, and search itself. Moreover, by reducing errors, improving reasoning, and integrating across tools (like Search and coding environments), Google is positioning AI not just as an assistant, but as something proactively helpful. That means the AI you engage with could become more capable, contextual, and tailored to you.

Why should I care?
If you use AI tools, create digital content, or rely on search and productivity apps, Gemini 3 could noticeably shift your day-to-day experience. It's not just a speed bump; it's a broader upgrade in what Google's AI can understand and produce.

* Better answers: With stronger reasoning and multimodal input handling, interactions can feel quicker, more natural, and more accurate.
* Smarter workflows: Whether coding, researching, or working on creative projects, the tools around you may feel smoother and more capable, cutting down on the small frustrations that slow you down.
* Platform shifts: As Google weaves Gemini 3 deeper into Search, Workspace, and other apps, expect familiar features to quietly evolve, even if you don't notice the change right away.

In short, even if you don't "see" Gemini 3 directly, you'll likely feel its influence as it becomes the engine behind more of Google's ecosystem. It builds on Gemini 2.5's foundation but with sharper reasoning, better instruction-following, and more stable multimodal performance. Tasks that previously tripped up 2.5, like maintaining context or juggling multiple images, are handled more smoothly here, resulting in an upgrade that feels less like a version step and more like Google redefining how its AI assistant should behave.

And here's where things get even more interesting: Gemini 3 Pro isn't just better on paper; it's also scoring noticeably higher across various AI benchmarks. These gains show up in areas like long-form reasoning, code generation, and complex multimodal tasks. In real use, that translates to fewer moments where the model loses track of what you're asking, a higher chance of getting the answer you actually wanted, and more stable performance when juggling multiple files, images, or steps.

Okay, so what's next? If you're using the free version, you can start experimenting with Gemini 3 today, as it's already live across the Gemini app and in AI Mode in Search.
This means you can test its improved reasoning, multimodal input (text, images, etc.), and more intuitive prompts to see how it works for your daily tasks. For Pro (and Ultra) users, there's more in store: you'll get access to the full Gemini 3 Pro model's advanced capabilities (stronger reasoning, deeper context handling, richer multimodal responses) and soon the new "Deep Think" mode which is designed for the most complex workflows. All of this means that if you upgrade, you'll experience a higher-tier version of Gemini that's more powerful and responsive.
[26]
Google launches Gemini 3 with state-of-the-art reasoning, 'generative UI' for responses, more
Google today announced Gemini 3 with the goal of bringing "any idea to life." The first model available in this family is Gemini 3 Pro, with the rollout starting today for the Gemini app and AI Mode.

With Gemini 1.0, Google focused on native multimodality and the long context window. A year later, Gemini 2.0 brought advanced reasoning and the beginning of agentic capabilities, while Gemini 2.5 introduced deep reasoning and coding capabilities. Gemini 3 -- which drops the ".0" -- is Google's "most intelligent model" and positioned as helping you "bring any idea to life." It starts by getting better at figuring out the context and intent of your request, so that "you get what you need with less prompting." Gemini 3 is state-of-the-art in reasoning with the ability to "grasp depth and nuance," like "perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem." Gemini 3 Pro responses aim to be "smart, concise, and direct, trading cliche and flattery for genuine insight." It acts as a true thought partner that gives you new ways to understand information and express yourself, from translating dense scientific concepts by generating code for high-fidelity visualizations to creative brainstorming.

Gemini 3 Pro has a score of 1501 on LMArena and surpasses 2.5 Pro (1451), which previously held the top position. It outperforms the model it's replacing in all major benchmarks by a significant margin. This means Gemini 3 Pro is highly capable at solving complex problems across a vast array of topics like science and mathematics with a high degree of reliability.

Google today also announced the Gemini 3 Deep Think mode with even better reasoning and multimodal understanding. It outperforms Gemini 3 Pro on Humanity's Last Exam (41.0% without the use of tools) and GPQA Diamond (93.8%). This will be available in the coming weeks for AI Ultra subscribers.
It also achieves an unprecedented 45.1% on ARC-AGI (with code execution), demonstrating its ability to solve novel challenges.

Gemini 3 makes possible generative UI (or generative interfaces), wherein LLMs generate both content and entire user experiences. This includes web pages, games, tools, and applications that are "automatically designed and fully customized in response to any question, instruction, or prompt." This work represents a first step toward fully AI-generated user experiences, where users automatically get dynamic interfaces tailored to their needs, rather than having to select from an existing catalog of applications. Behind the scenes, Gemini 3 Pro leverages tool access like web search and image generation, as well as "carefully crafted system instructions." The system is guided by detailed instructions that include the goal, planning, examples, and technical specifications, including formatting, tool manuals, and tips for avoiding common errors. Finally, the output is sent through post-processors that address "potential common issues." This is launching today in the Gemini app as experiments.

Dynamic view sees Gemini 3 design and code a "fully customized interactive response for each prompt." It customizes the experience with an understanding that explaining the microbiome to a 5-year-old requires different content and a different set of features than explaining it to an adult, just as creating a gallery of social media posts for a business requires a completely different interface to generating a plan for an upcoming trip. Visual layout is the second experiment and creates an "immersive, magazine-style view complete with photos and modules." The main difference from dynamic view is that Gemini will generate sliders, checkboxes, and other filters that let you customize the results further. You might initially only see one of these experiments at a time to allow Google to gather feedback.
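The three-stage flow described above (a detailed system instruction guiding the model, tool calls supplying assets, and post-processors patching common issues) can be caricatured in a few lines. Every function body below is a placeholder standing in for a model or tool call, not Google's actual code; the stage boundaries are what the sketch is meant to show.

```python
# Caricature of the generative-UI pipeline: instruction -> model ->
# tools -> post-processing. All bodies are placeholders for illustration.
SYSTEM_INSTRUCTION = (
    "Goal: design a fully customized interactive page for the prompt. "
    "Plan first, follow the formatting spec, call tools for images."
)

def call_model(system: str, prompt: str) -> str:
    # Stand-in for the LLM call: emits a page with a tool placeholder.
    return f"<html><body><h1>{prompt}</h1><img src='TOOL:image'></body></html>"

def run_tools(page: str) -> str:
    # Stand-in for tool access (e.g. image generation, web search).
    return page.replace("TOOL:image", "generated.png")

def post_process(page: str) -> str:
    # Stand-in for post-processors that fix "potential common issues".
    return page if page.startswith("<!DOCTYPE") else "<!DOCTYPE html>\n" + page

def generate_ui(prompt: str) -> str:
    return post_process(run_tools(call_model(SYSTEM_INSTRUCTION, prompt)))

page = generate_ui("Explain the microbiome")
print(page.splitlines()[0])  # <!DOCTYPE html>
```

The takeaway is architectural: the model is only the middle stage, bracketed by a hand-crafted instruction up front and deterministic repair passes at the end.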
For more on what Gemini 3 brings to the Gemini app (including Gemini Agent), read our story here. Meanwhile, this is the first time that a new model is coming to Google Search and AI Mode alongside the Gemini app. Starting this week, AI Pro and AI Ultra subscribers can go to the dropdown menu in the top-left corner and select "Thinking: 3 Pro reasoning and generative layouts." With Gemini 3, Google's query fan-out technique can perform more searches than before, asking more nuanced questions to improve the final response you get. AI Mode will also create generative UIs to produce interactive tools and simulations. For example, Google might build a mortgage calculator that lets you change interest rates and down payment. Another example is getting a physics simulation when you're learning about a topic. Gemini 3 will next come to all (free) AI Mode users in the US, with subscribers getting higher limits. Looking ahead, Google in the coming weeks will update Search's automatic model selection for subscribers to send challenging questions to Gemini 3 "while continuing to use faster models for simple tasks."

With Gemini 3, Google Antigravity was announced as a new agentic development platform that allows developers to "operate at a higher, task-oriented level." This IDE sees agents work across the editor, terminal, and browser. Available now on Mac, Windows, and Linux, it uses Gemini 3, Gemini 2.5 Computer Use, and Nano Banana.
[27]
Gemini 3 brings upgraded smarts and new capabilities to the Gemini app
Gemini Agent: A new tool that orchestrates and completes complex, multi-step tasks on your behalf, rolling out to Google AI Ultra members first. Gemini 3 sets a new bar for AI model performance. You'll notice the responses are more helpful, better formatted and more concise. But it's not just the response formatting that's improved. The entire experience has gotten smarter. It's our best vibe coding model ever, so the apps you build in Canvas will be more full-featured. It's the best model in the world for multimodal understanding, so whether you're uploading a photo of your homework to ask for extra help, or transcribing notes from a lecture you missed, the Gemini app is ready. Gemini 3 Pro is rolling out globally starting today. To use it, simply select "Thinking" from the model selector. Our Google AI Plus, Pro and Ultra subscribers will continue to enjoy higher limits. We're also extending our free year of Google AI Pro to U.S. college students, ensuring they have access to the best of Google AI, including Gemini 3.
[28]
Google Releases Its Most Powerful AI Model, Gemini 3 -- Here's What You Need to Know - Decrypt
Google released Gemini 3 Pro in a public preview today, calling it the company's most capable AI model to date. The system handles text, images, audio, and video simultaneously while processing up to 1 million tokens of context -- roughly equivalent to 700,000 words, or about 10 full-length novels. The preview model is available for anyone to try for free.

Google said the model outperformed its predecessor, Gemini 2.5 Pro, across nearly every benchmark the company tested. On Humanity's Last Exam, an academic reasoning test, Gemini 3 Pro scored 37.5% compared to 2.5 Pro's 21.6%. On ARC-AGI-2, a visual reasoning puzzle benchmark, the gap widened further: 31.1% versus 4.9%.

Of course, the real challenge at this point in the AI race isn't technical so much as it is gaining commercial market share. Google, which once seemed indomitable in the search space, has given up an enormous amount of ground to OpenAI, whose ChatGPT claims some 800 million weekly users, versus Gemini's reported 650 million monthly users. Google has not shared weekly figures, but they would presumably be far lower than its monthly count.

Still, the technical achievements of Gemini 3 are impressive. Gemini 3 Pro uses what Google calls a sparse mixture-of-experts architecture. Instead of activating all 1 trillion-plus parameters for every query, the system routes each input to specialized subnetworks. Only a fraction of the model -- the experts suited to that specific task -- runs at any given time, cutting computational costs while maintaining performance. Unlike GPT and Claude, which are large, dense models (a jack of all trades), Google's approach works the way a large organization operates. A company with 1,000 employees doesn't call everyone to every meeting; specific teams handle specific problems. Gemini 3 Pro works the same way, directing questions to the right expert networks.
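The top-k routing at the heart of a sparse mixture-of-experts layer can be sketched generically. This is a textbook illustration of the technique with arbitrary sizes and random weights, not Gemini's architecture: a router scores every expert, only the top-k actually run, and their outputs are blended by softmax weights.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is a small feed-forward weight matrix; the router is a
# linear layer scoring how relevant each expert is to the input token.
experts = rng.standard_normal((NUM_EXPERTS, DIM, DIM)) * 0.1
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                       # score all experts
    top = np.argsort(logits)[-TOP_K:]         # pick the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over chosen experts
    # Only TOP_K of NUM_EXPERTS experts run, so most parameters stay idle
    # for this input -- the source of the compute savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(DIM))
print(y.shape)  # (16,)
```

With these toy sizes, each forward pass touches 2 of 8 experts (a quarter of the expert parameters); production systems scale the same idea to hundreds of experts so the active fraction is far smaller.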
Google trained the model on web documents, code repositories, images, audio files, and video -- plus synthetic data generated by other AI systems. The company filtered the training data for quality and safety, removing pornographic content, violent material, and anything violating child safety laws. Training happened on Google's Tensor Processing Units using JAX and ML Pathways software.

A quick test of the model showed that it was very capable. In our usual coding test, asking it to generate a stealth game, this was the first model to produce a 3D game instead of a 2D experience. Other runs produced 2D versions, but all were functional and fast. The interface follows the style of ChatGPT or Perplexity, which encourage further interaction by sharing follow-up questions and suggestions, but Google's implementation is a lot cleaner and more helpful. While generating code, the interface provides tips to help with subsequent prompts, so the user can guide the model into generating better code, fixing bugs, and improving the app's logic and UI. It also gives users the option to deploy their code and build Gemini-powered apps. Overall, this model seems to be especially focused on coding tasks. Creativity is not its strong point, but it can be easy to guide with a good system prompt and examples, as it has a very large token context window.

An archived version of Gemini 3's model card -- a document that provides essential information about the model's design, intended use, performance, and limitations -- published by Google DeepMind shows that Gemini 3 Pro can generate up to 64,000 tokens of output and maintains a knowledge cutoff of January 2025. Google acknowledged the model may hallucinate and occasionally experiences slowness or timeouts. An official model card is not currently available. As mentioned, Google AI Studio is currently offering everyone free access to Gemini 3 Pro. Vertex AI and the Gemini API also support the model.
Gemini 3 Pro is not yet available through the Gemini app, however -- not even for paying Gemini Pro subscribers. The November release positions Google against Anthropic's Claude Sonnet 4.5, Grok 4.1 and even OpenAI's GPT-5.1. Benchmark scores suggest Gemini 3 Pro leads in reasoning and multimodal tasks, though real-world performance varies by use case. Google distributed Gemini 3 Pro through its cloud platforms subject to existing terms of service. The company's generative AI prohibited use policy applies, blocking use in dangerous activities, security compromises, sexually explicit content, violence, hate speech, and misinformation.
[29]
Google's New Gemini Pro Features Are Out, but Most of Them Will Cost You
Free users have access to limited daily generations in the Gemini app or on the Gemini website. Google has officially launched its Gemini 3 Pro model, and for the first time, it's already making its way directly to the public, without making you wait months after the announcement to actually get your hands on it. On top of introducing a few new features, Google says the model has the typical increased accuracy, but also, that it finally cuts down on some of the excessive people-pleasing flattery that drives me nuts when using AI. The catch? Most features are paywalled, and one is still in the oven. For the more obvious improvements, Google says Gemini 3 Pro is at the top of LMArena with a score of 1501 points, while also demonstrating "PhD-level reasoning" on Humanity's Last Exam, earning a 37.5% score without the use of tools. Math and science buffs will also be happy to hear that the new model scored 91.9% on GPQA Diamond and 23.4% on MathArena Apex. But for everyone else, the real excitement comes from the brand new things you can do in Gemini 3 Pro. This isn't just a performance bump. Instead of just being Gemini 2.5 but better, Gemini 3 Pro also debuts three new consumer-facing features, plus a new platform for developers. It's a much meatier release than, say, GPT-5.1 -- assuming you're willing to pay up. Google's press release bounces back and forth between calling this feature "Generative UI" and "Generative Interfaces," although I prefer the former. Essentially, it's supposed to make your AI results easier to read. It's one of the few new features that's freely open to everyone, although Google has two different approaches to it, and not everyone might see the same one. The first is called "visual layout," and is more similar to Gemini's current results pages. Essentially, when you enter a complex, multi-layered prompt, like "plan a three-day trip to Rome next summer," you'll now get an explorable visual itinerary, rather than static text.
This will include photos and clickable modules, but remain within the Gemini interface you've gotten used to. It may also have interactive sliders and buttons for further refining your search, but Google says the idea is to give you an "immersive, magazine-style view." The second, then, is more like an on-demand webpage. It's called "dynamic view," and basically uses agentic coding to generate an on-the-fly app to help you learn more about a topic. This will include generated text and imagery, but may appear quite different from the Gemini interface you've gotten used to. An example in the press release sees Google generating a dynamic view response to "explain the Van Gogh Gallery with life context for each piece," which creates a scrollable page with a clickable header, the art justified on the left, and scrollable text on the right with clickable subheadings and pull quotes, all with custom font and design that can differ pretty wildly from Gemini's other output. One of the issues I've had with lengthy plain-text AI responses is that they can get a bit tiresome to skim, and Google's hoping this will help deal with that. Still, it hasn't settled on an approach. Both visual layout and dynamic view are rolling out today, to free and paying customers, but the company says that "to help us compare these experiments, you may initially see only one of them." As a contrast to Generative UI, Gemini 3 Deep Think is behind a hefty paywall, and still in the oven. The feature is an evolution on the existing Gemini 2.5 Deep Think mode, and essentially allows the AI to take more time to answer a question so it can better reason out an appropriate response. It's similar to existing free deep research modes across Gemini and other AI apps, but more broadly applicable, and Google says it's great for use cases like intricate graphic design or coding. Unfortunately, it's only in the hands of a few "safety testers" for now. 
Google told press it will start to release "in the coming weeks," but even then, it'll be limited to Google AI Ultra subscribers. Since Google AI Ultra is $250/month (although it starts at $125/month for the first three months), that's a pricey proposition. Another Google AI Ultra-exclusive feature, Gemini Agent will start rolling out to subscribers today, and is similar to the company's recent AI shopping push. The idea is for it to take action for you, to help you handle multi-step tasks without having to leave Gemini. The key thing is that it works with Google's other apps. So, for instance, you could ask it to help organize your Gmail inbox, and it'll separate out your pending emails into categories and submit emails it thinks you could delete for your approval. Or, you could ask it to help you rent a car under a certain budget for an upcoming trip, and it'll scan your emails for flight and hotel details, then search for an appropriate booking and reach out to you before finalizing it. It's got access to the web, Google Workspace, and other AI tools like Canvas, so in theory, it can pull from pretty much every resource Google has to help you answer your question. Google also says it will "seek confirmation before critical actions like making purchases or sending messages, and you can take over any time." If it works, it sounds like the type of virtual secretary most people probably thought of when these companies started talking about AI just a few years ago. But with such a high monthly price tag, it's probably only for the highest of rollers for now. So, kind of like a regular secretary, I guess. Finally, this feature is intended more for developers than the average internet user, but it's worth bringing up, if only because it's free. Called Google Antigravity, it's a new development platform focusing on agentic coding, AKA having the AI generate code for you. 
It's more complicated than that -- you're able to freely browse and edit generated code -- but the idea is to make it easier to use Gemini as a development partner. It's not Gemini's first development tool, but the idea is to give developers a dedicated AI workspace. To that end, Antigravity can pull from existing features like Canvas, as well as implement "browser control capabilities" and "asynchronous interaction patterns" to achieve an "agent-first product form factor" that can "autonomously plan and execute complex, end-to-end software tasks." I'm sure the people who would use Antigravity will know what all that means. To a layperson like me, it seems like the big improvement is that it's one app that you can use to go straight from ideation to publishing, rather than having to bounce between several different Gemini tools. What's probably more interesting, though, is that Antigravity is free, "with generous rate limits on Gemini 3 Pro usage." Yep, developing with Gemini is actually now cheaper than using Gemini, depending on what you want to do. At least some of Gemini 3 Pro is available to everyone. To try Gemini 3 Pro for yourself, open the Gemini app or webpage and select "Thinking" from the model selector underneath your prompt. Google told me free users will have up to five Gemini 3 Pro prompts per day, while AI Plus, Pro, and Ultra subscribers will "enjoy higher limits." Google AI Pro and Ultra subscribers will also be able to use Gemini 3 Pro right from Google search's AI Mode, also by selecting "Thinking" from the model selector (next to the "AI Mode" button). Free users will have to stick to the Gemini app for now, unfortunately, but Google says that will change "soon." On the plus side, AI Mode will still be able to generate visual layouts and dynamic views, assuming you have access to Gemini 3 Pro within it. AI Overviews will also start to use Gemini 3 Pro for AI Pro and Ultra subscribers, although that's set for "the coming weeks." 
When it arrives, AI Mode will also be upgraded to automatically send your hardest questions to Gemini 3 Pro, without you having to pick it in the model selector (although you can continue to manually pick older models if you prefer). Personally, I'm glad to see Google releasing Gemini 3 Pro alongside concrete new features, instead of just making "AI, but better." At the same time, because most of this requires a subscription, it's clear we're still a little while away from that wide AI adoption the tech industry still seems to be clamoring for.
[30]
Gemini 3 gets the power to shape Search results for maximum impact
Access begins with AI Pro and AI Ultra subscribers in the US. Google's latest AI advancements are making their debut today, with Gemini 3 arriving to showcase its next-gen reasoning and agentic skills. We're getting our first look at those upgrades across the many Google services that already tap into Gemini, and for many of us the one we're going to be interacting with the most has got to be Search. With Gemini 3, Google Search promises better performance, deeper, more targeted results, and a visual overhaul to how it presents information.
[31]
Google's Gemini 3 AI model makes its long-awaited debut, crushing rivals on top benchmarks - SiliconANGLE
Google's Gemini 3 AI model makes its long-awaited debut, crushing rivals on top benchmarks Google LLC has come up with the perfect response to the bevy of artificial intelligence announcements at Microsoft Ignite this week, launching its most intelligent model: Gemini 3. The launch of Gemini 3 has been hotly anticipated for several months, and Google is doing everything it can to ensure that users won't be disappointed. As the company's smartest model yet, Google says, it possesses industry-leading reasoning capabilities and the ability to learn almost any new concept. It's being made broadly accessible from the get-go, launching in the Gemini application, Google Search and developer platforms such as AI Studio, Vertex AI and Google Antigravity - an all new agentic development platform. Smarter, more thoughtful In a blog post, Google DeepMind Chief Executive Demis Hassabis claimed that Gemini 3 is the "best model in the world for multimodal understanding," as well as one of the most capable vibe coding models ever released. This is based on its high-level reasoning capabilities, which enable Gemini 3 to outperform its predecessor Gemini 2.5 on every major AI benchmark. Gemini 3 has jumped straight to the top of the LMArena Leaderboard with an all-time high score of 1,501 points, Hassabis said, and it also demonstrates Ph.D.-level reasoning capabilities with a 37.5% score on Humanity's Last Exam and a 91.9% score on GPQA Diamond. Moreover, it excels at mathematics, achieving a new high score of 23.4% on the MathArena Apex benchmark, while its factual accuracy also sets a new standard for LLMs, with a 72.1% score on SimpleQA Verified. Hassabis said users will immediately notice that Gemini 3 brings a lot more nuance and depth to every interaction it has with them, with smarter, more concise and direct responses, "trading cliche and flattery for genuine insight". 
Google has worked hard to ensure Gemini 3 tells users what they need to hear, rather than what they want to hear, Hassabis explained, allowing it to act more like a "true thought partner". And if Gemini 3 doesn't prove smart enough when pressed with more complex queries, users will be able to activate a new "Deep Think" mode, which enhances its reasoning skills by giving it more time to ponder before it generates a response. In addition, Gemini 3 boasts a unique ability to synthesize information about any topic across multiple modalities at once, including text, video, images, code and audio. "For example if you want to learn how to cook in your family tradition, Gemini 3 can decipher and translate handwritten recipes in different languages into a shareable family cookbook," Hassabis said. Alternatively, if people want to educate themselves about any new topic they're unfamiliar with, they can feed Gemini 3 with various academic papers, video-based lectures and tutorials, and it will come up with a concise lesson, including interactive flashcards and visualizations, to help the user digest the material more easily. In addition, anyone who explores the AI Mode in Search will be treated to new Gemini 3-enabled experiences, such as immersive visual layouts, interactive tools and simulations, which will be quickly generated based on their query. Higher-level agentic coding Developers will likely be among the most avid users of Gemini 3, for Hassabis claims it's now the industry's best vibe coding model, thanks to its ability to handle more complex prompts and instructions. His claim is backed up by yet more benchmarks, with the model achieving an Elo score of 1,487 on WebDev Arena and 54.2% on Terminal-Bench 2.0, which tests models for their ability to use a computer via the terminal.
Hassabis said developers will be able to get started writing code with Gemini 3 from today in AI Studio, Vertex AI and the Gemini Command Line Interface tool, as well as third-party developer tools such as Cursor, GitHub, JetBrains, Replit and Manus. However, the most exciting way to access Gemini 3 may be with Google Antigravity, which is a new agentic development platform that supports agentic automation at a much higher, task-oriented level. Antigravity promises to transform developer productivity by evolving AI from an assistant into a true partner that's able to perform work on its own initiative. The platform looks much like a typical coding environment, but the agents have been granted direct access to the editor, the terminal and a web browser, giving them all of the tools they need to autonomously plan and execute complex programming tasks on behalf of humans. The Antigravity platform is powered by Gemini 3, and it's also tightly integrated with Google's Gemini 2.5 Computer Use model that provides advanced browser control, as well as its best image editing model, Nano Banana. Stronger security As always with new AI models, there's a security angle, and Google claims that Gemini 3 is also the most secure it has yet released. It said the model has undergone the most extensive set of evaluations of any of its models, being tested against critical domains in its own Frontier Safety Framework, and also by third-party organizations such as Apollo, Vaultis and Dreadnode. The results of those tests show that Gemini 3 demonstrates significantly reduced sycophancy and higher resistance to prompt injection attacks and malware-based attacks, Hassabis said. Meanwhile, Gemini 3's Deep Think mode is currently undergoing additional safety evaluations and input safety tests, prior to being made available to Google AI Ultra subscribers in the coming weeks.
[32]
Google Launches Gemini 3, Claims Benchmark Lead Over GPT-5.1 and Claude Sonnet 4.5 | AIM
Google on Tuesday announced Gemini 3, calling it another big step on the path toward AGI. "It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem," said Google CEO Sundar Pichai in a statement. Google said it is rolling out Gemini 3 across its major products, including Search. The model is now live in AI Mode in Search with expanded reasoning capabilities and new dynamic experiences. The model is also available in the Gemini app, as well as to developers through AI Studio, Vertex AI and Google's new agent-focused development platform, Google Antigravity. Demis Hassabis, CEO of Google DeepMind, and Koray Kavukcuoglu, the company's CTO and chief AI architect, announced in a joint statement that Gemini 3 Pro is now available in preview. "We're beginning the Gemini 3 era," they said, noting that the model is being integrated into Search, Workspace, the Gemini app and developer platforms. Google said Gemini 3 Pro outperforms Gemini 2.5 Pro, OpenAI GPT-5.1 and Claude Sonnet 4.5 across major AI benchmarks, including LMArena, Humanity's Last Exam, GPQA Diamond and MathArena Apex. The company highlighted improvements in multimodal capabilities, citing scores of 81% on MMMU-Pro and 87.6% on Video-MMMU. It also recorded 72.1% on SimpleQA Verified, a measure of factual accuracy. The launch also introduced Gemini 3 Deep Think, an improved reasoning mode. Google said it scores 41% on Humanity's Last Exam, 93.8% on GPQA Diamond and 45.1% on ARC-AGI-2 with code execution. "Deep Think pushes the boundaries of intelligence even further," the company said. With broader multimodal input, longer context and new planning abilities, Google said users can apply Gemini 3 to tasks such as analysing research papers, translating handwritten family recipes, generating visualisations, or evaluating sports performance. 
In Search, AI Mode now supports generative UI elements and interactive simulations. For developers, Google launched Google Antigravity, an agent-first development platform built around Gemini 3. The company said Antigravity allows agents to "autonomously plan and execute complex, end-to-end software tasks" with direct access to an editor, terminal and browser. Gemini 3 also integrates with tools including Google AI Studio, Vertex AI, Gemini CLI, Cursor, GitHub, JetBrains and Replit. The model's long-horizon planning was cited as another improvement. Google said Gemini 3 Pro leads the Vending-Bench 2 leaderboard, sustaining consistent decision-making over a simulated year of operations. Subscribers to Google AI Ultra can access these agentic capabilities through Gemini Agent in the Gemini app. Google emphasised expanded safety testing, saying Gemini 3 has undergone its most extensive evaluations to date, including assessments by external partners such as Apollo, Vaultis and Dreadnode. "Gemini 3 is our most secure model yet," the company said, noting reduced sycophancy, better prompt-injection resistance and stronger protection against misuse.
[33]
Gemini 3 Can Help You Learn, Build Anything
Google is introducing Gemini 3, its newest and most intelligent model, but is also making news by shipping it on day 1 to Search users. This is a first for Google, meaning a lot of people now have access to it. So what makes Gemini 3 so special? Benchmark numbers mean hardly anything to us, but Google says that Gemini 3 combines all of Gemini's capabilities together. It says Gemini 3 is much better at figuring out the context and intent behind a request, something AI has sometimes had a hard time with. At its core, Gemini 3 can "learn anything" and "build anything." Google provides a few examples of things you can do with this power. AI Mode in Search is getting Gemini 3 access, enabling new generative UI experiences such as "immersive visual layouts" and "interactive tools and simulations." These results will be tailored to your specific query. For building, developers should appreciate the improvements Gemini 3 brings. Google says it has top scores on the WebDev Arena leaderboard at 1487 Elo, 54.2% on Terminal-Bench 2.0, and outperforms Gemini 2.5 Pro on SWE-bench Verified (76.2%). Whether you're building a website, a game, or an app, Gemini 3 should be more helpful than previous iterations. Google says Gemini 3 is available inside of AI Studio now. Gemini Agent: This is an experimental feature that handles multi-step tasks directly inside Gemini. It connects to Google apps to manage different aspects, such as your calendar and email inbox. For example, you can ask it to, "Research and help me book a mid-size SUV for my trip next week under $80/day using details from my email." Gemini will then locate the email with flight information, compare rentals within your proposed budget and prepare the booking for you to look over. This function will first be available to Google AI Ultra subscribers, starting today. 
For those wanting to get their hands on Gemini 3, it's available today inside of the Gemini app. Google AI Pro and Ultra subscribers can access it also in AI Mode in Search, while developers can access it today inside of the Gemini API in AI Studio.
[34]
Start building with Gemini 3
Today we are introducing Gemini 3, our most intelligent model that can help bring any idea to life. Built on a foundation of state-of-the-art reasoning, Gemini 3 Pro delivers unparalleled results across every major AI benchmark compared to previous versions. It also surpasses 2.5 Pro at coding, mastering both agentic workflows and complex zero-shot tasks. Gemini 3 Pro fits right into existing production agent and coding workflows, while also enabling new use cases not previously possible. It's available in preview at $2/million input tokens and $12/million output tokens for prompts 200k tokens or less through the Gemini API in Google AI Studio and Vertex AI for enterprises (see pricing for rate limits and full pricing details). Additionally, it can be utilized via your favorite developer tools within the broader ecosystem and is available, with rate limits, free of charge in Google AI Studio.
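As a back-of-the-envelope illustration of those preview prices, here is a minimal Python sketch (not part of the announcement) that estimates per-request cost from token counts. The constants mirror the figures quoted above, and the higher tier for prompts over 200k tokens is deliberately omitted:

```python
# Preview prices quoted above: $2 per 1M input tokens and $12 per 1M
# output tokens, for prompts of 200k tokens or less. The separate tier
# for longer prompts is intentionally left out of this sketch.

INPUT_PRICE_PER_M = 2.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 12.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request under the <=200k-token tier."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 50k-token prompt with an 8k-token response:
print(round(estimate_cost(50_000, 8_000), 3))  # 0.196
```

At these rates, a fairly large request costs well under a dollar, which is consistent with the post's framing of Gemini 3 Pro as usable in production agent workflows.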
[35]
Google unveils Gemini's next generation, aiming to turn its search engine into a 'thought partner'
SAN FRANCISCO (AP) -- Google is unleashing its Gemini 3 artificial intelligence model on its dominant search engine and other popular online services in the high-stakes battle to create technology that people can trust to enlighten them and manage tedious tasks. The next-generation model unveiled Tuesday comes nearly two years after Google took the wraps off its first iteration of the technology. Google designed Gemini in response to a competitive threat posed by OpenAI's ChatGPT that came out in late 2022, triggering the biggest technological shift since Apple released the iPhone in 2007. Google's latest AI features initially will be rolled out to Gemini Pro and Ultra subscribers in the United States before coming to a wider, global audience. Gemini 3's advances include a new AI "thinking" feature within Google's search engine that company executives believe will become an indispensable tool that will help make people more productive and creative. "We like to think this will help anyone bring any idea to life," Koray Kavukcuoglu, a Google executive overseeing Gemini's technology, told reporters. As AI models have become increasingly sophisticated, the advances have raised worries that the technology is more prone to behave in ways that jumble people's feelings and thoughts while feeding them misleading information and fawning flattery. In some of the most egregious interactions, AI chatbots have faced accusations of becoming suicide coaches for emotionally vulnerable teenagers. The various problems have spurred a flurry of negligence lawsuits against the makers of AI chatbots, although none have targeted Gemini yet. Google executives believe they have built in guardrails that will prevent Gemini 3 from hallucinating or being deployed for sinister purposes such as hacking into websites and computing devices. 
Gemini 3's responses are designed to be "smart, concise and direct, trading cliche and flattery for insight -- telling you what you need to hear, not just what you want to hear. It acts as a true thought partner," Kavukcuoglu and Demis Hassabis, CEO of Google's DeepMind division, wrote in a blog post. Besides providing consumers with more AI tools, Gemini 3 is also likely to be scrutinized as a barometer that investors may use to get a better sense about whether the massive torrent of spending on the technology will pay off. After starting the year expecting to spend $75 billion, Google's corporate parent Alphabet recently raised its capital expenditure budget to between $91 billion and $93 billion, with most of the money earmarked for AI. Other Big Tech powerhouses such as Microsoft, Amazon and Facebook parent Meta Platforms are spending nearly as much -- or even more -- on their AI initiatives this year. Investors so far have been mostly enthusiastic about the AI spending and the breakthroughs they have spawned, helping propel the values of Alphabet and its peers to new highs. Alphabet's market value is now hovering around $3.4 trillion, more than doubling in value since the initial version of Gemini came out in late 2023. Alphabet's shares edged up slightly Tuesday after the Gemini 3 news came out. But the sky-high values also have amplified fears of a potential investment bubble that will eventually burst and drag down the entire stock market. For now, AI technology is speeding ahead. OpenAI released its fifth generation of the AI technology powering ChatGPT in August, around the same time the next version of Claude came out from Anthropic. Like Gemini, both ChatGPT and Claude are capable of responding rapidly to conversational questions involving complex topics -- a skill that has turned them into the equivalent of "answer engines" that could lessen people's dependence on Google search. 
Google quickly countered that threat by implanting Gemini's technology into its search engine to begin creating detailed summaries called "AI Overviews" in 2023, and then introducing an even more conversational search tool called "AI mode" earlier this year. Those innovations have prompted Google to de-emphasize the rankings of relevant websites in its search results -- a shift that online publishers have complained is diminishing the visitor traffic that helps them finance their operations through digital ad sales. The changes have been mostly successful for Google so far, with AI Overviews now being used by more than 2 billion people every month, according to the company. The Gemini app, by comparison, has about 650 million monthly users. With the release of Gemini 3, the AI mode in Google's search engine is also adding a new feature that will allow users to click on a "thinking" option in a tab that company executives promise will deliver even more in-depth answers than has been happening so far. Although the "thinking" choice in the search engine's AI mode initially will only be offered to Gemini Pro and Ultra subscribers, the Mountain View, California, company plans to eventually make it available to all comers.
[36]
Google released Gemini 3 Pro with Gemini Agent
Google announced Gemini 3 Pro, its most intelligent AI model, a few weeks before the first anniversary of Gemini 2. The model delivers state-of-the-art reasoning and class-leading coding performance. It becomes available immediately in the Gemini app, Google Search's AI Mode, and developer platforms. Gemini 3 Pro sets a new benchmark record on Humanity's Last Exam, a test recognized as one of the most challenging evaluations for AI systems. The model achieves 37.5 percent accuracy, surpassing the prior leader, Grok 4, by 12.1 percentage points. This result comes without reliance on external tools such as web search, demonstrating the model's standalone capabilities in handling complex, diverse questions across subjects. On the LMArena leaderboard, Gemini 3 Pro claims the top position with 1,501 points. This score reflects its performance across multiple evaluation categories, positioning it ahead of competing models in real-world user preference metrics compiled by the platform. Within the Gemini app, Gemini 3 Pro generates responses that are more concise and exhibit improved formatting. Users access the model by selecting "Thinking" from the model picker, making it available to all users. AI Plus, Pro, and Ultra subscribers benefit from higher rate limits, allowing extended usage before encountering restrictions. Gemini 3 Pro enables Gemini Agent, a feature that extends capabilities from Project Mariner, Google's web-surfing Chrome AI introduced at the end of the previous year. Gemini Agent performs specific tasks on behalf of users. For instance, when managing an email inbox, the agent executes the work directly rather than providing only general advice, such as organizing messages or drafting replies based on user instructions. To utilize Gemini Agent fully, users grant it access to their Google apps, integrating it seamlessly with services like Gmail and Calendar. 
This permission allows the agent to interact with personal data and complete actions within those environments. In Google Search's AI Mode, Gemini 3 Pro launches first for AI Pro and Ultra subscribers through the "Thinking" dropdown selection. The model extends to AI Overviews, targeting the most difficult queries posed to the search engine. A new routing algorithm for AI Mode and AI Overviews deploys in coming weeks, directing complex questions to Gemini 3 Pro automatically based on query analysis. Gemini 3 Pro enhances AI Mode's content discovery by refining the fan-out search technique. This method expands queries into multiple parallel searches. The model's advanced intelligence conducts additional searches and identifies relevant, credible content that earlier models overlooked, improving result comprehensiveness. The model's multi-modal understanding supports dynamic interfaces in AI Mode responses. For mortgage loan research, it generates an interactive loan calculator embedded in the output, enabling users to input variables like interest rates, loan amounts, and terms to compute payments and amortization schedules directly within the search interface. Developers and enterprise customers access Gemini 3 Pro via the Gemini API, AI Studio, and Vertex AI. These platforms provide APIs for integration, prompt testing in AI Studio, and scalable deployment through Vertex AI for production workloads. Google released Antigravity, a new agentic coding application. Antigravity programs autonomously, generates its own tasks to advance projects, and delivers progress reports to users. It handles full development cycles, from initial code generation to iterative improvements and status updates. Alongside Gemini 3 Pro, Google introduced Gemini 3 Deep Think, an enhanced reasoning mode. Initial availability goes to safety testers, followed by rollout to AI Ultra subscribers. This mode amplifies reasoning depth for particularly demanding tasks. 
Gemini 3 Pro's deployment spans consumer and professional applications. In the Gemini app, its concise formatting streamlines information presentation, reducing verbosity while maintaining completeness. The integration with Gemini Agent shifts from advisory responses to proactive execution, handling real-world workflows like email triage through direct API calls to Google services. AI Mode's fan-out augmentation with Gemini 3 Pro processes queries with greater precision, executing broader search variants to surface authoritative sources. The dynamic interfaces leverage the model's ability to interpret text, images, and data jointly, producing tools like financial calculators that respond to user inputs in real time. Antigravity's autonomous task creation allows it to break down coding projects into subtasks, such as designing algorithms, implementing functions, and debugging, while reporting milestones like completed modules or test coverage percentages. Gemini 3 Deep Think extends reasoning chains, applying iterative thinking to refine solutions before final output.
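Google has not published how AI Mode's fan-out works internally. As a purely illustrative sketch of the idea described above (expanding one query into several parallel sub-searches and merging de-duplicated results), the pattern looks roughly like this; the variant templates and the `search_web` stub are invented for illustration:

```python
# Toy sketch of a query "fan-out": one query becomes several variants,
# the variants run in parallel, and the results are merged in order
# with duplicates dropped. This illustrates the concept only; it is
# not Google's implementation.
from concurrent.futures import ThreadPoolExecutor

def search_web(query: str) -> list[str]:
    # Stand-in for a real search backend.
    return [f"result for: {query}"]

def fan_out(query: str) -> list[str]:
    variants = [
        query,
        f"{query} explained",
        f"{query} comparison",
        f"best {query}",
    ]
    merged: list[str] = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves the order of the variants list.
        for results in pool.map(search_web, variants):
            for r in results:
                if r not in merged:  # de-duplicate while keeping order
                    merged.append(r)
    return merged

print(len(fan_out("mortgage loans")))  # 4 distinct stub results
```

The interesting claim in the article is not the mechanics of parallel search but that a stronger model chooses better variants and judges relevance of what comes back, which is where Gemini 3 Pro reportedly improves on its predecessors.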
[37]
Gemini 3 Finally Arrives for Smart Devices along with New App Tweaks - Phandroid
Google recently announced its latest update to the Gemini app, which finally brings over the much-awaited Gemini 3 model. Google says that version 3 is capable of better responses with enhanced formatting, in addition to improved performance in areas like code generation and multimodal understanding. As such, the update brings Generative Interfaces -- this includes a "visual layout" for immersive, magazine-style views (like a travel itinerary) and "dynamic view" for real-time, custom-coded interactive interfaces which can immediately adapt to a user's prompt. There's also support for an experimental Gemini Agent, a new tool that orchestrates and completes complex, multi-step tasks across Google apps, such as organizing an inbox or researching and preparing a car rental booking based on details in a user's email. The Gemini Agent is rolling out first to Google AI Ultra subscribers in the U.S. As for the Gemini app itself, Google has tweaked the UI a bit, which should make chats and finding created content a lot easier, as well as improved shopping results with integrated product listings, comparison tables, and prices directly from Google's Shopping Graph. Gemini 3 Pro is rolling out globally starting today and is accessible by selecting "Thinking" in the model selector.
[38]
Gemini App and AI Mode in Search Get New Features With Gemini 3 AI Model
* Google is adding automatic model selection to Search
* AI Mode in Search can now generate visual layouts
* Both the Gemini app and AI Mode now support dynamic views
Google has now released the Gemini 3 Pro artificial intelligence (AI) model. Although it is available in preview, and the rollout across the globe might take a couple of days, the Mountain View-based tech giant has already started announcing its integration across its suite of products. Two of the company's products that are getting the model's capabilities are the Gemini app and AI Mode in Search. The tech giant is introducing a new capability called generative interfaces in both of these platforms, which will offer users a more visual and interactive way of finding information. New Features in Gemini App In a blog post, Josh Woodward, the Vice President of Google Labs, Gemini and AI Studio at Google, introduced new improvements in the Gemini app. While the company uses the phrase "app," these changes will be available across the Android and iOS app, as well as the Gemini web interface. The company says it is redesigning the Gemini platform with a clean and modern look. What that means is users will find it easier to start chats; find images, videos, and reports they have created via a new My Stuff folder; and enjoy shopping for products with the integration of Google's Shopping Graph. However, the biggest introduction is generative interfaces or generative UI. In a separate blog post, the company describes this capability as an interface "which dynamically creates immersive visual experiences and interactive interfaces -- such as web pages, games, tools, and applications -- that are automatically designed and fully customized in response to any question, instruction, or prompt." The Gemini app is first releasing generative interfaces with two experiments. First is visual layout. It is a magazine-style view that features photos and modules. 
So, if a user prompts the chatbot for an itinerary for a trip, Gemini might show a carousel of different types of trips that users can tap on to select, a slider might let them select how many days they will be on a vacation, and another bar might let them click on different points of interest. The selection will then allow the AI to curate the right plan, without them having to spell it out by either typing or speaking. Second is dynamic view. Using Gemini 3's agentic coding capabilities, the chatbot can design and code a custom user interface in real-time in response to a user prompt. For instance, if a user asks Gemini to "explain the Van Gogh Gallery with life context for each piece," it will generate an interactive window where users can click on elements, scroll, and slide to learn about the topic visually. Both of these features are rolling out now, although the company says users will initially only see one of them to help the company compare between the two. Finally, the company is also introducing Gemini Agent. It is an experimental feature that can perform multi-step tasks within the app. It can connect to Google apps to manage the user's Calendar, add reminders, and even organise their inbox by drafting replies to emails. It can also perform web-based tasks such as making bookings or scheduling appointments. It is currently available on the web for Google AI Ultra subscribers in the US. New Features in AI Mode AI Mode in Search is also getting a few new capabilities with the integration of Gemini 3 Pro. However, these are first rolling out to the Google AI Pro and AI Ultra subscribers in the US. They can now select "Thinking" from the model dropdown menu to access the latest AI model. Currently, Gemini 3 Pro usage is rate-limited, but the company states that limits will be increased soon. So, what's new in AI Mode? 
Due to improved reasoning capability, AI Mode can now tackle more complex queries and intelligently sift through a large number of web pages to find contextually relevant responses. A new automatic model selection tool is also being added to Search, which will route users' more challenging questions in AI Mode and AI Overviews directly to Gemini 3 Pro. For simpler questions, it will continue to use the faster models. Generative interfaces are also making their debut in AI Mode. Google says this will allow the AI tool to create visual layouts for responses in real-time, complete with interactive tools and simulations based on user queries. "When the model detects that an interactive tool will help you better understand the topic, it uses its generative capabilities to code a custom simulation or tool in real-time and adds it into your response," the post explained. Highlighting an example, the tech giant said if a user is researching mortgage loans, Gemini 3 in AI Mode can create a custom interactive loan calculator directly in the interface to help them compare different options.
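For a sense of the arithmetic such a generated loan calculator would perform, here is the standard fixed-rate amortization formula, M = P * r * (1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly interest rate, and n the number of payments. This is the textbook formula, not Google's implementation:

```python
# Standard fixed-rate mortgage payment formula (textbook arithmetic,
# not Google's code): M = P*r*(1+r)^n / ((1+r)^n - 1).

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of monthly payments
    if r == 0:
        return principal / n  # zero-interest edge case
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# e.g. a $400,000 loan over 30 years at 6% APR:
print(round(monthly_payment(400_000, 0.06, 30), 2))
```

An interactive calculator of the kind described would simply re-run this computation as the user drags sliders for rate, amount, and term.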
[39]
Google Search with Gemini 3: Our most intelligent search yet
Today, we introduced Gemini 3, our most intelligent model with state-of-the-art reasoning, deep multimodal understanding and powerful agentic capabilities. It's now available in Google Search, starting with AI Mode -- marking the first time we've brought a Gemini model to Search on day one. Gemini 3 brings incredible reasoning power to Search because it's built to grasp unprecedented depth and nuance for your hardest questions. It also unlocks new generative UI experiences so you can get dynamic visual layouts with interactive tools and simulations -- generated specifically for you. Here's how Gemini 3 is supercharging Search. Starting today, Google AI Pro and Ultra subscribers in the U.S. can use Gemini 3 Pro, our first model in the Gemini 3 family of models, by selecting "Thinking" from the model drop-down menu in AI Mode. With Gemini 3, you can tackle your toughest questions and learn more interactively because it better understands the intent and nuance of your request. And soon, we'll bring Gemini 3 in AI Mode to everyone in the U.S. with higher limits for users with the Google AI Pro and Ultra plans.
[40]
Google launches Gemini 3, its most advanced reasoning model yet
Gemini, which the company first launched two years ago, now has 650 million monthly active users, and 13 million developers are building on it, Google and Alphabet chief executive Sundar Pichai said. Google on Tuesday launched its most advanced reasoning model, Gemini 3, which will be available globally across its products including Search, AI Studio and the Gemini app. The tech giant also launched an agentic development platform, Google Antigravity. Google termed the Gemini 3 launch a big step towards artificial general intelligence (AGI). Gemini 3 will be available across products, with more complex reasoning capabilities, in all countries, including India. Gemini Pro and Ultra users will be able to access it with higher limits. Through its latest partnership with Reliance Jio, users will be able to access Pro features. Google is also seeing increasing traction in India for products such as Nano Banana. Chris Struhar, vice-president of product for the Gemini app, said that with the Jio partnership and free access to the Gemini suite for students, the firm is seeing how students use Gemini for homework. Gemini 3 is Google's most intelligent model for multimodal understanding, agentic capabilities and vibe coding, Koray Kavukcuoglu, chief technology officer and chief AI architect at Google DeepMind, said at a global media briefing held virtually. Gemini 3 outperforms its predecessor across AI benchmarks. The model showcases PhD-level reasoning with a 37.5% score on Humanity's Last Exam, a benchmark that evaluates models across mathematics, humanities and natural sciences. The latest model also outperforms Gemini 2.5 Pro when it comes to coding. Users can utilise Gemini 3 for coding in Google AI Studio, Vertex AI, and its agentic development platform, Antigravity.
Gemini 3 will also be available on Cursor and other coding platforms. With the launch of Antigravity, Google is entering the integrated development environment (IDE) space; an IDE is an application that helps developers write and manage code efficiently, like Cursor. In response to ET's question about whether Google will be competing with platforms such as Cursor, Kavukcuoglu said the firm would not look at it that way, since it is also partnering closely with Cursor and others in the market. "It's important for us to reach and connect with the users where they are. It is early days in AI development and (it is unclear) how AI impacts different areas and different industries. I think this is important for us to be able to experiment as well. I am sure there will be others who are experimenting, and each product will sort of evolve too," he said. He said Google will continue its partnership with Cursor.
[41]
Google Unleashes Gemini 3 Pro: The New Benchmark for AI Intelligence
After over seven months of anticipation, Google has officially released its state-of-the-art Gemini 3 Pro AI model. According to Google, Gemini 3 represents "a suite of highly-capable, natively multimodal, reasoning models." With the release of Gemini 3, Google claims it's "taking another big step on the path toward AGI". As for the architecture, Gemini 3 Pro is a sparse mixture-of-experts (MoE) model built on the Transformer architecture. On top of that, Google says Gemini 3 Pro was trained solely on Google's TPUs, which is impressive. Coming to benchmarks, Gemini 3 Pro has hit it out of the park. In the challenging Humanity's Last Exam, Gemini 3 Pro achieved 37.5% without any tool use, outclassing OpenAI's latest GPT-5.1 model, which scored 26.5%. In LMArena, Gemini 3 Pro has taken the first spot with an ELO score of 1,501 points. Next, in the new ARC-AGI-2 benchmark, Gemini 3 Pro got 31.1%, again beating GPT-5.1, which scored only 17.6%. In SWE-Bench Verified, Gemini 3 Pro got 76.2%, nearly matching GPT-5.1's 76.3%. However, in this benchmark, Anthropic's Claude Sonnet 4.5 continues to lead with 77.2%. Google is also working to bring Gemini 3 Deep Think, which scored 41% on Humanity's Last Exam and 45.1% on ARC-AGI-2, to Google AI Ultra subscribers. In terms of agentic coding, Gemini 3 Pro is the leader in WebDev Arena with a 1,487 ELO score. It can do long-horizon, high-level planning to perform multi-step, real-world tasks. Gemini Agent is coming to Google AI Ultra subscribers. Google also introduced a new Antigravity dev platform, which is basically an agent-first development environment. It bundles Gemini 3 Pro, the Gemini 2.5 Computer Use model, and the Nano Banana image generation model. Agents can directly control the editor, terminal, and browser to plan tasks and execute code. Gemini 3 Pro is rolling out in the Gemini app for everyone, starting today. Pro and Ultra subscribers can use the new model in AI Mode in Google Search.
[42]
Sundar Pichai Introduces Gemini 3 As Google's 'Most Intelligent' AI Model: 'Get What You Need With Less Prompting' - Alphabet (NASDAQ:GOOG), Amazon.com (NASDAQ:AMZN)
On Tuesday, Alphabet Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL) CEO Sundar Pichai unveiled Google Gemini 3, calling it the company's most capable and nuanced AI system so far, as the search giant accelerates its competition with OpenAI's GPT-5. Google Calls Gemini 3 Its Most Advanced AI Model Yet Google introduced Gemini 3 as a major leap forward in its multimodal and agentic capabilities, positioning the model as a smarter and more context-aware successor to Gemini 2.5. "Gemini 3 is our most intelligent model that helps you bring any idea to life," the company said in a blog post. In a post on X, formerly Twitter, Pichai said the upgraded system is designed to help users "get what you need with less prompting" by understanding intent more accurately and handling complex tasks with greater depth. The model begins rolling out on Tuesday to select paid subscribers through the Gemini app, AI Mode in Search and enterprise tools, with broader availability expected in the coming weeks. Aiming To Reinvent Search With More Visual, Interactive Answers Google said Gemini 3 will power new generative interfaces capable of producing magazine-style explanations, interactive calculators and dynamic layouts featuring images, tables and grids. In a demonstration, Google showed the model explaining Van Gogh's works with contextual visuals and narrative summaries. In the blog post, Demis Hassabis, CEO of Google DeepMind, said Gemini 3 is designed to replace "cliché and flattery" with more honest and insightful responses. A Direct Challenge To OpenAI As Big Tech Ramps Up AI Spending The launch comes as OpenAI continues updating GPT-5. This month, the AI startup has released two improved versions described as "warmer," more capable and better at following instructions.
Both companies are pushing aggressively to stay ahead as demand for AI accelerates. Alphabet and other tech giants -- including Microsoft Corp (NASDAQ:MSFT), Meta Platforms, Inc. (NASDAQ:META) and Amazon.com, Inc. (NASDAQ:AMZN) -- expect capital spending to collectively reach about $600 billion this year. Google Expands Developer Tools With Antigravity Google also unveiled Antigravity, a new agent platform that lets developers build at a higher, task-oriented level. Businesses can integrate Gemini 3 through Vertex AI, where the model can generate onboarding materials, analyze videos and factory images, or support procurement workflows. The Gemini app now has 650 million monthly active users, and Google's AI Overviews reach two billion monthly users.
[43]
Google's Gemini 3 AI Models Are Finally Here With These New Features
* Gemini 3 Pro tops the LMArena leaderboard with 1501 Elo * The AI models feature improved frontend coding capability * Google is integrating Gemini 3 in Search's AI Mode Google finally released the Gemini 3 family of artificial intelligence (AI) models on Tuesday. The Mountain View-based tech giant called it the company's most intelligent AI model yet, highlighting that it outperforms its predecessor as well as OpenAI's GPT-5.1 in every single major benchmark. The new AI model brings improvements across various aspects, including reasoning, conversations, coding, mathematics, as well as agentic capabilities. The company highlights that Gemini 3 is a major step forward towards creating complex agentic experiences for users.
[44]
A new era of intelligence with Gemini 3
Nearly two years ago we kicked off the Gemini era, one of our biggest scientific and product endeavors ever undertaken as a company. Since then, it's been incredible to see how much people love it. AI Overviews now have 2 billion users every month. The Gemini app surpasses 650 million users per month, more than 70% of our Cloud customers use our AI, 13 million developers have built with our generative models, and that is just a snippet of the impact we're seeing. And we're able to get advanced capabilities to the world faster than ever, thanks to our differentiated full stack approach to AI innovation -- from our leading infrastructure to our world-class research and models and tooling, to products that reach billions of people around the world. Every generation of Gemini has built on the last, enabling you to do more. Gemini 1's breakthroughs in native multimodality and long context window expanded the kinds of information that could be processed -- and how much of it. Gemini 2 laid the foundation for agentic capabilities and pushed the frontiers on reasoning and thinking, helping with more complex tasks and ideas, leading to Gemini 2.5 Pro topping LMArena for over six months. And now we're introducing Gemini 3, our most intelligent model, that combines all of Gemini's capabilities together so you can bring any idea to life. It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem. Gemini 3 is also much better at figuring out the context and intent behind your request, so you get what you need with less prompting. It's amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room. And starting today, we're shipping Gemini at the scale of Google. That includes Gemini 3 in AI Mode in Search with more complex reasoning and new dynamic experiences. 
This is the first time we are shipping Gemini in Search on day one. Gemini 3 is also coming today to the Gemini app, to developers in AI Studio and Vertex AI, and in our new agentic development platform, Google Antigravity -- more below. Like the generations before it, Gemini 3 is once again advancing the state of the art. In this new chapter, we'll continue to push the frontiers of intelligence, agents, and personalization to make AI truly helpful for everyone. We hope you like Gemini 3, we'll keep improving it, and look forward to seeing what you build with it. Much more to come!
[45]
Google launches Gemini 3, embeds AI model into search immediately
Google has introduced Gemini 3, its latest artificial intelligence innovation, which is set to revolutionize products like Google Search from day one. This strategic launch is designed to enhance revenue streams and reinforce Google's dominance in the rapidly evolving AI landscape. New functionalities such as Gemini Agent and Antigravity promise greater efficiency and versatility for users and enterprises alike. Alphabet's Google on Tuesday launched the latest version of its artificial intelligence model Gemini, emphasizing that the new capabilities will be immediately available in several profit-generating products like its search engine. Gemini 3, arriving 11 months after the second generation of the model, appears on paper to keep Google at the forefront of the AI race. During a press briefing, executives highlighted Gemini 3's lead position on several popular industry leaderboards that measure AI model performance. CEO Sundar Pichai described it as "our most intelligent model," in a company blog post. However, the AI race has increasingly shifted away from benchmarks to money-making applications of the technology, as Wall Street watches for signs of an AI bubble. Alphabet's stock has so far been buoyed this year largely due to the financial success from AI offerings from its cloud computing division. But even with leading developers like Google, OpenAI and Anthropic behind them, new AI model updates have had trouble distinguishing themselves, only attracting attention when they fail, as Meta experienced earlier this year. Google emphasized that Gemini 3, unlike past releases, was already underpinning a handful of revenue-generating consumer and enterprise products at launch. "We think Gemini has set quite a new pace in terms of both releasing the models, but also getting it to people faster than ever before," Koray Kavukcuoglu, Google's chief AI architect, told reporters during the briefing. 
Pichai said the Gemini 3 launch marked the first time that Google had incorporated its new model into its search engine from day one. In the past, new versions of Gemini took weeks or months to embed into Google's most highly used products. Paying users of Google's premium AI subscription plan will have access to Gemini 3 capabilities in AI Mode, a search feature that dispenses with the web's standard fare in favor of computer-generated answers for complicated queries. NEW FEATURES Improvements to Gemini 3 in domains such as coding and reasoning enabled Google to build out a set of new features, both for consumers and enterprise customers. The company debuted "Gemini Agent," a feature that can complete multi-step tasks, such as organizing a user's inbox or booking travel arrangements. The tool brings Google closer to its AI chief Demis Hassabis' vision for a "universal assistant" that has been referred to internally as AlphaAssist, as Reuters previously reported. Google also redesigned the Gemini app to return answers reminiscent of a full-fledged website, a further blow to content publishers who rely on web traffic to generate revenue. Josh Woodward, the vice president in charge of the app, demonstrated to reporters how Gemini can now respond to a query like "create a Van Gogh gallery with life context for each piece" by generating an on-demand interface with visual and interactive elements. For business customers, Google previewed a new product called Antigravity, a new software development platform where AI agents can plan and execute coding tasks on their own.
[46]
Google rolls out Gemini 3 with Deep Think mode, enhanced coding and agentic actions
Google has introduced Gemini 3, the next generation of its AI model series, bringing upgrades in long-form reasoning, multimodal interpretation, interface generation, developer tools, and agent-based task execution. The rollout includes Gemini 3 Pro, a new Deep Think mode, expanded multimodal learning workflows, redesigned app capabilities, and the first release of Gemini Agent for multi-step automation. Google and Alphabet CEO Sundar Pichai shared a brief note highlighting the progress of the Gemini program, which began nearly two years ago. He said Gemini has grown into one of Google's largest scientific and product efforts, supported by an integrated full-stack approach combining infrastructure, research, models, and products. Among the key points he highlighted, Pichai said Gemini 3 brings together advancements from earlier versions to deliver deeper reasoning, better intent understanding, and more accurate multi-step interpretation with fewer instructions. He confirmed that Gemini 3 is rolling out across Google's ecosystem -- AI Mode in Search, the Gemini app, AI Studio, Vertex AI, and Google Antigravity -- marking the first time a Gemini release is launching inside Search on day one. He added that Gemini 3 represents the next phase of Google's AI roadmap, with continued focus on intelligence, agentic systems, and personalization in future releases. Google says Gemini 3 Pro delivers stronger reasoning, clearer responses, and better multimodal grounding, and has published benchmark results to support the claim. The model is optimized to avoid generic phrasing, offer more direct answers, and maintain accuracy across text, audio, images, video, and code. Deep Think extends step-by-step reasoning with stronger analytical performance and is undergoing extended safety reviews before a wider rollout. Gemini 3 supports expanded learning tasks through multimodal understanding and a 1M-token context window, and improves instruction adherence, zero-shot coding, and agentic coding.
Antigravity offers a development environment built around Gemini 3, enabling parallel, consistent end-to-end software workflows. Google also notes that Gemini 3 delivers its best-ever vibe coding performance inside Canvas, enabling more feature-rich app generation within the workspace. These capabilities are part of the revamped Gemini app, which now includes a "Thinking" model selector and a My Stuff library for saved outputs. Gemini Agent, built using insights from Project Mariner, can break down and execute complex tasks, and it confirms sensitive actions such as purchases or message sending. On long-horizon planning, Gemini 3 strengthens multi-step planning and avoids tool-use drift during long tasks. It leads Vending-Bench 2, a benchmark designed to test year-long operational decision-making. Gemini 3 includes stronger protections through extensive internal and external assessments. Google says additional Gemini 3 series models will be released soon, and the team looks forward to user feedback during the rollout.
[47]
Gemini 3 release imminent - here's what to expect from Google's latest release
Gemini 3, Google's new AI model, is quietly rolling out. Early users report impressive performance, exceeding expectations. Businesses like Equifax are seeing significant productivity gains. This suggests Google is focusing on quality over hype. Gemini 3 is expected to compete strongly with rivals. Google is also preparing other AI models for release soon. Google's Gemini 3 launch is apparently imminent. Google's next big AI leap is unfolding quietly, almost cautiously. While the tech world has spent months focusing on Gemini controversies, something very different has been happening in the background. Gemini 3 has begun surfacing in real-world environments without fanfare and early users say the silent rollout speaks louder than any marketing campaign. For months, Google's Gemini ecosystem has been overshadowed by controversies, privacy lawsuits, image-generation misfires, and API changes that left developers frustrated. Critics accused Google of prioritising speed over reliability and racing against OpenAI without proper testing. Yet in November 2025, something unexpected happened. Google rolled out Gemini 3 quietly, without a keynote, launch video, or even a blog post. No hype -- just performance, as per a report by Aim Media House. Gemini 3 first appeared subtly inside Canvas on mobile, where users noticed that the tool began producing higher-quality results than usual. Soon after, comparisons between Canvas mobile (believed to be running Gemini 3) and Canvas desktop (running Gemini 2.5 Pro) started circulating in forums and developer communities. And the reaction was consistent: the model felt dramatically more capable. One Reddit user captured the sentiment bluntly: "Everything here is real and backed up by evidence. This isn't hype." For a community accustomed to disappointment and overpromising, that line stood out.
Across Reddit threads and technical communities, users noticed that traditionally complex tasks were suddenly achievable in a single shot. Testing showed Gemini 3 handling work that usually required several rounds of refinement. This included: * Smooth, functional SVG animations * Clean web designs generated on the first attempt * Accurate 3D physics simulations, complete with gravity and momentum * Touch-interaction logic built correctly without extra prompting Developers who had struggled with Gemini 2.5 Pro's inconsistency were the first to flag the dramatic improvement. Without any announcement, Gemini 3's quality did the talking. Beyond enterprise use, benchmark results reflect Gemini 3's maturity. The model achieved gold-medal level performance at the 2025 International Collegiate Programming Contest, a respected test of algorithmic reasoning. On the "Humanity's Last Exam" benchmark, it scored 18.8%, outpacing previous generations. On WebVoyager, a benchmark for real-world web task performance, Gemini reached 83.5% accuracy. These aren't record-breaking scores, but they show consistency where earlier models faltered. Gemini is no longer lagging in reasoning and task completion, it now competes at parity or better, as quoted in a report by Aim Media House. Google's whisper-soft release strategy has sparked its own debate. Some argue the company is avoiding scrutiny. Others believe the silence reflects a shift in culture, one focused on stability over showmanship. This time, the evidence supports the latter. A company that rushes out unfinished products doesn't secure 97% enterprise retention. A model with quality problems doesn't land 83.5% on WebVoyager. And a firm ignoring developer feedback doesn't start testing in small, controlled channels like mobile Canvas or AI Studio.
The privacy lawsuit involving default Gemini activation in Gmail still lingers. The image-generation failures and developer frustrations were real. But the November rollout hints at a recalibration, a recognition that enterprise users value reliability more than spectacle, as quoted in a report by Aim Media House. The pattern suggests a shift inside the organisation. For years, Google was accused of overhyping its AI work. Now, its strategy feels quieter, more cautious, more deliberate. Instead of racing to win headlines, Google seems intent on earning back trust through output quality. The company's biggest advantage remains its ecosystem. With 44% market share in productivity suites, Gemini's native integration into Gmail, YouTube, Android, Search, and Chrome gives it pathways no competitor can replicate. In the enterprise world, the most useful model usually beats the most advanced one. Gemini 3's design appears aligned with that reality. It isn't trying to be the flashiest model -- it's trying to be the most functional across Google's products. Google appears to be preparing the public for Gemini 3's official release. The model briefly appeared inside AI Studio, the development environment where students, researchers, and engineers test Gemini models. Reports noted that its presence in AI Studio suggests a rollout is close, possibly hours or days away. AI Studio offers controls such as context length and temperature, making it an ideal environment for quiet phased deployment. Even though Gemini 2.5 Pro still appears as the top model in the interface, users spotted subtle adjustments hinting at the incoming update, as per a report by Techzine. One notable line referenced a tuning detail: Gemini 3 performs best at a temperature of 1.0, while lower settings reduce reasoning quality. That indicates deliberate optimisation for complex tasks.
More signs emerged on the Vertex AI cloud platform. A variant listed as Gemini-3-pro-preview-11-2025 was spotted, suggesting Google has multiple configurations under internal testing. These surfacing references point to a coordinated rollout across developer infrastructure, as per a report by Techzine. Reports have outlined Gemini 3's biggest technical strengths: 1. Advanced Coding Abilities Gemini 3 can recreate entire operating systems -- like MacOS and Windows -- inside a browser. This could reshape software development workflows. 2. Problem-Solving Expertise It has solved mathematical problems previously considered unsolvable, showcasing precision in complex computations. 3. Creative Range It generates: * Functional games * Detailed 3D visualisations * Artistic designs The ECPT variant emerged as the most powerful during internal testing, balancing technical and creative strengths, as per a report by Techzine. According to the reports, Gemini 3 Pro will release on November 18, 2025. Select users have limited early access through AI Studio, making this a phased rollout rather than a single launch moment, as per a report. When will Gemini 3 fully launch? Gemini 3 Pro is scheduled for release on November 18, 2025, with limited early access already appearing in AI Studio. What makes Gemini 3 significant? Its real-world performance, enterprise adoption, coding capability, and improved reasoning place it ahead of earlier Gemini versions, and closer to its main competitors.
[48]
Google unveils Gemini 3, its most ambitious AI model yet, in response to OpenAI
Alphabet, Google's parent company, launched Gemini 3, its new generation of artificial intelligence model, on Tuesday, aiming to compete head-on with OpenAI in the field of generative AI. Dubbed the most advanced model developed by Alphabet, Gemini 3 stands out for its ability to provide more accurate and nuanced responses, while requiring fewer instructions. CEO Sundar Pichai hailed an AI that is "useful rather than flattering," capable of better understanding the user's intentions. The model will be gradually integrated into Google's AI search products, the Gemini app, which has 650 million monthly users, and professional services via Vertex AI and the Gemini API. Google also unveiled Antigravity, a new task-oriented development platform designed for natural language assisted coding. According to Google Labs, Gemini 3 offers a "vibe coding" experience, allowing users to generate interfaces, visualizations, and interactive explanations, similar to a "digital magazine." The tool targets both developers and businesses, offering a variety of uses: creating interactive simulators, analyzing industrial videos, or generating HR content such as onboarding modules. The launch comes at a time of rapid acceleration in investment in artificial intelligence. The digital giants Alphabet, Microsoft, Amazon, and Meta are expected to collectively spend over $380bn this year on their AI infrastructures. Gemini 3 is thus positioned as a direct response to recent developments in GPT-5 at OpenAI, while signaling Google's desire to make its models more assertive and less consensual. With Gemini 3, Google intends to reestablish its technological leadership and spread AI "across its entire infrastructure," Sundar Pichai says.
[49]
Google Gemini 3 launch gets attention from Elon Musk and Sam Altman: Here is what they said
According to Google's CEO, Gemini 3 is the company's "most intelligent model." Google has introduced Gemini 3, the latest version of its artificial intelligence model, nearly two years after revealing the first Gemini system. That original version was built in response to the rapid rise of OpenAI's ChatGPT, which kicked off a major shift in the tech world toward advanced AI tools. To mark the new release, Google CEO Sundar Pichai posted a short but excited message on X. His one-word post simply read, "Geminiii," showing his enthusiasm for the company's newest AI model. In a blog post, Pichai also called Gemini 3 Google's "most intelligent model." The launch quickly drew reactions from two of Pichai's biggest rivals in the AI race: Elon Musk, who runs xAI, and Sam Altman, CEO of OpenAI. Musk replied to Pichai's post with a brief message: "Congrats," notably without any emojis. Altman also posted on X, "Congrats to Google on Gemini 3! Looks like a great model." These statements highlight that even as the tech giants compete fiercely, they also recognise each other's breakthroughs as milestones for the entire industry. Also read: Google launches Gemini 3 with big upgrades in reasoning, multimodal performance and agentic tools: All details According to Google, Gemini 3 brings major improvements in three main areas: reasoning, multimodal understanding and autonomous task execution. The model can be used across several Google services, including Search, the Gemini app, and developer tools. The first model in this new series is the Gemini 3 Pro. Google says it performs better than Gemini 2.5 Pro on important industry benchmarks. These include benchmarks like LMArena, Humanity's Last Exam, GPQA Diamond, and MathArena Apex. Google has also introduced Gemini 3 Deep Think, a version focused on tougher reasoning and complex problem-solving. The company plans to roll out this mode to Google AI Ultra subscribers after it completes more safety checks.
[50]
Google launches Gemini 3 with big upgrades in reasoning, multimodal performance and agentic tools: All details
The launch includes Google Antigravity, a new agent-first development platform built to automate complex coding and software tasks. After much anticipation, Google has finally announced the next version of its flagship artificial intelligence model, Gemini 3. According to Google's blog post, the new model improves reasoning, multimodal understanding, and autonomous task execution and is available for use across multiple Google products, including Search, the Gemini app, and developer platforms. The Gemini 3 Pro, which was released in preview, is the first model in the new series. Google claims it outperforms its predecessor, Gemini 2.5 Pro, in key industry benchmarks. The company reported high scores on LMArena, Humanity's Last Exam, GPQA Diamond, and MathArena Apex, as well as improved performance in multimodal assessments such as MMMU-Pro and Video-MMMU. The model also received a high accuracy rating from SimpleQA Verified. The company has also released a new version, Gemini 3 Deep Think, which is designed for more advanced reasoning tasks. Early testing by the company shows that it performs better on complex problem-solving benchmarks such as Humanity's Last Exam and ARC-AGI 2. Google stated that this mode will be made available to Google AI Ultra subscribers following additional safety checks. For the first time, Google is including its newest model in Search at launch. AI Mode in Search will now use Gemini 3 to create more dynamic layouts, interactive elements, and detailed visual explanations. The model is also accessible via the Gemini API, Google AI Studio, Vertex AI, the Gemini CLI, and a new agent-focused development environment called Google Antigravity. Antigravity allows developers to use AI agents to plan, write, and validate code within an integrated workspace. The platform includes Gemini 3 Pro, Google's most recent computer-use model for browser-based actions, as well as an updated image editing model. 
According to Google, Gemini 3 also improves long-term planning. The model received the highest score in the most recent Vending-Bench 2 test, which assesses an AI system's ability to maintain consistent decision-making over extended tasks. These capabilities are also being made available to end users via Gemini Agent within the Gemini app. Google also announced that additional models in the Gemini 3 lineup will be released in the coming weeks.
Google releases Gemini 3, its most advanced AI model, achieving record benchmark scores and introducing Antigravity, an AI-first development environment. The launch comes amid intensifying AI competition and concerns about market bubbles.
Google has released Gemini 3, its most advanced artificial intelligence model to date, marking a significant milestone in the company's AI development journey. The new model is immediately available through the Gemini app and AI search interface, coming just seven months after the Gemini 2.5 release.[1][2]
Source: MacRumors
The model has achieved unprecedented benchmark scores, topping the LMArena leaderboard with an ELO score of 1,501, surpassing its predecessor Gemini 2.5 Pro by 50 points.[1] On the challenging Humanity's Last Exam benchmark, which tests PhD-level knowledge and reasoning, Gemini 3 set a new record with a score of 37.5 percent, significantly outperforming GPT-5 Pro's previous high of 31.64.[2]

Google has positioned Gemini 3 as a major step toward artificial general intelligence (AGI), emphasizing its expanded simulated reasoning abilities and improved understanding of text, images, and video.[1] Tulsee Doshi, Google's head of product for the Gemini model, noted the "massive jump in reasoning," describing responses with "a level of depth and nuance that we haven't seen before."[2]

Factual accuracy, a persistent challenge for generative AI models, has seen substantial improvement. In the 1,000-question SimpleQA Verified test, Gemini 3 achieved a record 72.1 percent accuracy rate.[1] The model also demonstrates reduced sycophantic behavior and enhanced security against prompt injection attacks.[5]

Alongside Gemini 3, Google introduced Antigravity, an AI-first integrated development environment that represents a significant advancement in coding assistance technology.[1] The platform combines a ChatGPT-style prompt window with a command-line interface and browser window, allowing the coding agent to demonstrate the impact of changes in real time.[2]
Source: Google Blog
DeepMind CTO Koray Kavukcuoglu explained that "the agent can work with your editor, across your terminal, across your browser to make sure that it helps you build that application in the best way possible."[2] Google describes Antigravity as providing an "active partner" experience, autonomously planning and executing complex software tasks while validating its own code.[5]
Gemini 3 introduces sophisticated "vibe coding" capabilities, where users can describe end goals in plain language and let the model assemble the necessary interface or code.[3] The model can autonomously generate visual elements, sketching diagrams or creating simple animations when it determines a visual approach would be more effective than text.[3]

Josh Woodward, VP of Google Labs, Gemini, and AI Studio, highlighted the "immersive, magazine-style view complete with photos and modules" that the visual layout generates, noting that "these elements don't just look good but invite your input to further tailor the results."[3]

The release strengthens Google's position amid intensifying AI competition, with the company integrating Gemini 3 across its product portfolio. The model is now available in AI Mode in Search and, for Pro and Ultra subscribers, in AI Overviews, where it can generate interactive elements.[5] Google has observed "double-digit" increases in natural language queries and a 70 percent spike in visual search, both leveraging Gemini's capabilities.[4]
Source: NDTV Gadgets 360
Demis Hassabis, CEO of Google DeepMind, emphasized the company's broad integration strategy, stating "We are the engine room of Google, and we're plugging in AI everywhere now."[4] With over 650 million monthly active users of the Gemini app and 13 million software developers incorporating the model into their workflows, Google's AI ecosystem continues expanding.[2]