79 Sources
[1]
Google unveils Gemini 3 AI model and AI-first IDE called Antigravity
Google has kicked its Gemini rollout into high gear over the past year, releasing the much-improved Gemini 2.5 family and cramming various flavors of the model into Search, Gmail, and just about everything else the company makes. Now, Google's increasingly unavoidable AI is getting an upgrade. Gemini 3 Pro is available in a limited form today, featuring more immersive, visual outputs and fewer lies, Google says. The company also says Gemini 3 sets a new high-water mark for vibe coding, and Google is announcing a new AI-first integrated development environment (IDE) called Antigravity, which is also available today. Google says the release of Gemini 3 is yet another step toward artificial general intelligence (AGI). The new version of Google's flagship AI model has expanded simulated reasoning abilities and shows improved understanding of text, images, and video. So far, testers like it -- Google's latest LLM is once again atop the LMArena leaderboard with an ELO score of 1,501, besting Gemini 2.5 Pro by 50 points. Factuality has been a problem for all gen AI models, but Google says Gemini 3 is a big step in the right direction, and there are myriad benchmarks to tell the story. In the 1,000-question SimpleQA Verified test, Gemini 3 scored a record 72.1 percent. Yes, that means the state-of-the-art LLM still screws up almost 30 percent of general knowledge questions, but Google says this still shows substantial progress. On the much more difficult Humanity's Last Exam, which tests PhD-level knowledge and reasoning, Gemini 3 set another record, scoring 37.5 percent without tool use. Math and coding are also a focus of Gemini 3. The model set new records in MathArena Apex (23.4 percent) and WebDev Arena (1487 ELO). On SWE-bench Verified, which tests a model's ability to resolve real-world software engineering issues, Gemini 3 hit an impressive 76.2 percent.
[2]
Google launches Gemini 3 with new coding app and record benchmark scores | TechCrunch
On Tuesday, Google released Gemini 3, its latest and most advanced foundation model, which is now immediately available through the Gemini app and AI search interface. Coming just seven months after the Gemini 2.5 release, the new model is Google's most capable LLM yet, and an immediate contender for the most capable AI tool on the market. The release also comes less than a week after OpenAI released GPT 5.1, and a mere two months after Anthropic released Sonnet 4.5 -- a reminder of the blistering pace of frontier model development. A more research-intensive version of the model, called Gemini 3 Deepthink, will also be made available to Google AI Ultra subscribers in the coming weeks, once it passes further rounds of safety testing. "With Gemini 3, we're seeing this massive jump in reasoning," said Tulsee Doshi, Google's head of product for the Gemini model. "It's responding with a level of depth and nuance that we haven't seen before." Some of that reasoning power is already registering on independent benchmarks. With a score of 37.4, the model marked the highest score on record on the Humanity's Last Exam benchmark, meant to capture general reasoning and expertise. The previous high score, held by GPT-5 Pro, was 31.64. Gemini 3 also topped the leaderboard on LMArena, a human-led benchmark that measures user satisfaction. According to Google, the Gemini app currently has more than 650 million monthly active users, and 13 million software developers have used the model as part of their workflow. Alongside the base model, Google also released a Gemini-powered coding interface called Google Antigravity, allowing for multi-pane agentic coding similar to agentic IDEs like Warp or Cursor 2.0. Specifically, Antigravity combines a ChatGPT-style prompt window with a command-line interface and a browser window that can show the impact of the changes made by the coding agent. "The agent can work with your editor, across your terminal, across your browser to make sure that it helps you build that application in the best way possible," said DeepMind CTO Koray Kavukcuoglu.
[3]
Google's new Gemini 3 vibe-codes its responses and comes with its own agent
When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. "Visual layout generates an immersive, magazine-style view complete with photos and modules," says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. "These elements don't just look good but invite your input to further tailor the results." With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward "a true generalist agent." It will be available on the web for Google AI Ultra subscribers in the US starting November 18. The overall approach can seem a lot like "vibe coding," where users describe an end goal in plain language and let the model assemble the interface or code needed to get there. The update also ties Gemini more deeply into Google's existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variation of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model's reasoning rather than the existing AI Mode.
[4]
Google's Gemini 3 model keeps the AI hype train going - for now
Google's latest model reportedly beats its rivals in several benchmark tests, but issues with reliability mean concerns remain over a possible AI bubble Google's latest chatbot, Gemini 3, has made significant leaps on a raft of benchmarks designed to measure AI progress, according to the company. These achievements may be enough to allay fears of an AI bubble bursting for the moment, but it is unclear how well these scores translate to real-world capabilities. What's more, persistent factual inaccuracies and hallucinations that have become a hallmark of all large language models show no signs of being ironed out, which could prove problematic for any uses where reliability is vital. In a blog post announcing the new model, Google bosses Sundar Pichai, Demis Hassabis and Koray Kavukcuoglu write that Gemini 3 has "PhD-level reasoning", a phrase that competitor OpenAI also used when it announced its GPT-5 model. As evidence for this, they list scores on several tests designed to test "graduate-level" knowledge, such as Humanity's Last Exam, a set of 2500 research-level questions from maths, science and the humanities. Gemini 3 scored 37.5 per cent on this test, outclassing the previous record holder, a version of OpenAI's GPT-5, which scored 26.5 per cent. Jumps like this can indicate that a model has become more capable in certain respects, says Luc Rocher at the University of Oxford, but we need to be careful about how we interpret these results. "If a model goes from 80 per cent to 90 per cent on a benchmark, what does it mean? Does it mean that a model was 80 per cent PhD level and now is 90 per cent PhD level? I think it's quite difficult to understand," they say. "There is no number that we can put on whether an AI model has reasoning, because this is a very subjective notion." Benchmark tests have many limitations, such as requiring a single answer or multiple choice answers for which models don't need to show their working. "It's very easy to use multiple choice questions to grade [the models]," says Rocher, "but if you go to a doctor, the doctor will not assess you with a multiple choice. If you ask a lawyer, a lawyer will not give you legal advice with multiple choice answers." There is also a risk that the answers to such tests were hoovered up in the training data of the AI models being tested, effectively letting them cheat. The real test for Gemini 3 and the most advanced AI models - and whether their performance will be enough to justify the trillions of dollars that companies like Google and OpenAI are spending on AI data centres - will be in how people use the model and how reliable they find it, says Rocher. Google says the model's improved capabilities will make it better at producing software, organising email and analysing documents. The firm also says it will improve Google search by supplementing AI-generated results with graphics and simulations. Initial reactions online have included people praising Gemini's coding capabilities and ability to reason, but as with all new model releases, there have also been posts highlighting failures to do apparently simple tasks, such as tracing hand-drawn arrows pointing to different people, or simple visual reasoning tests. Google admits, in Gemini 3's technical specifications, that the model will continue to hallucinate and produce factual inaccuracies some of the time, at a rate that is roughly comparable with other leading AI models. 
The lack of improvement in this area is a big concern, says Artur d'Avila Garcez at City St George's, University of London. "The problem is that all AI companies have been trying to reduce hallucinations for more than two years, but you only need one very bad hallucination to destroy trust in the system for good," he says.
[5]
Gemini 3 Is Here -- and Google Says It Will Make Search Smarter
Google has introduced Gemini 3, its smartest artificial intelligence model to date, with cutting-edge reasoning, multimedia, and coding skills. As talk of an AI bubble grows, the company is keen to stress that its latest release is more than just a clever model and chatbot -- it's a way of improving Google's existing products, including its lucrative search business, starting today. "We are the engine room of Google, and we're plugging in AI everywhere now," Demis Hassabis, CEO of Google DeepMind, an AI-focused subsidiary of Google's parent company, Alphabet, told WIRED in an interview ahead of the announcement. Hassabis admits that the AI market appears inflated, with a number of unproven startups receiving multibillion-dollar valuations. Google and other AI firms are also investing billions in building out new data centers to train and run AI models, sparking fears of a potential crash. But even if the AI bubble bursts, Hassabis thinks Google is insulated. The company is already using AI to enhance products like Google Maps, Gmail, and Search. "In the downside scenario, we will lean more on that," Hassabis says. "In the upside scenario, I think we've got the broadest portfolio and the most pioneering research." Google is also using AI to build popular new tools like NotebookLM, which can auto-generate podcasts from written materials, and AI Studio which can prototype applications with AI. It's even exploring embedding the technology into areas like gaming and robotics, which Hassabis says could pay huge dividends in years to come, regardless of what happens in the wider market. Google is making Gemini 3 available today through the Gemini app and in AI Overviews, a Google Search feature that synthesizes information alongside regular search results. In demos, the company showed that some Google queries, like a request for information about the three-body problem in physics, will prompt Gemini 3 to automatically generate a custom interactive visualization on the fly. Robby Stein, vice president of product for Google Search, said at a briefing ahead of the launch that the company has seen "double-digit" increases in queries phrased in natural language, which are most likely targeted at AI Overviews, year over year. The company has also seen a 70 percent spike in visual search, which relies on Gemini's ability to analyze photos. Despite investing heavily in AI and making key breakthroughs, including inventing the transformer model that powers most large language models, Google was shaken by the sudden rise of ChatGPT in 2022. The chatbot not only vaulted OpenAI to center stage when it came to AI research; it also challenged Google's core business by offering a new and potentially easier way to search the web.
[6]
Google Says New Gemini 3 AI Model Is Its Most Capable Yet
Gemini 3, the latest AI model from Google, is the company's most intelligent model to date, with more advanced multimodal and vibe coding capabilities, the company said in a blog post on Tuesday. It's available now. Google says Gemini 3 is "built to grasp depth and nuance" and is better at understanding the intent behind a user's request. The company also touted Gemini 3's multimodal capabilities, such as its ability to turn a long video lecture into interactive flash cards or to analyze a person's pickleball match and find areas for improvement. Gemini 3 isn't limited to the app. It'll also be available in AI Mode in Search and, for Pro and Ultra subscribers, in AI Overviews. In AI Overviews, Gemini 3 can generate interactive elements. Google DeepMind's Demis Hassabis and Koray Kavukcuoglu said in a blog post that Gemini 3 Pro is less sycophantic, a problem that's been plaguing AIs and leading to AI psychosis in some. It's also more secure against prompt injection attacks, a type of attack in which bad actors try to make an AI ignore its original instructions and perform unintended actions. The company also unveiled Google Antigravity, a new agentic development platform. Google says Antigravity is like having an active partner while making tools or working on projects, autonomously planning and executing complex software tasks while validating its own code. It works in tandem with Gemini 2.5's Computer Use model for browser control and works with nano banana, Gemini 2.5's image model. Gemini 3's new, more powerful, agentic capabilities will only be available to $250/month Google AI Ultra subscribers at first. This will allow the Gemini Agent to do multi-step workflows, like planning a travel itinerary. Google's release of Gemini 3 comes as an AI war is heating up between it, OpenAI, Anthropic and xAI. Google's consistently been leading AI leaderboards, although other AI models haven't been far behind, sometimes trading spots at the top. With the release of Gemini 3, Google seems to be trying to solve for some of AI's more annoying problems, like hallucinations or sycophancy. It's also trying to prove that AIs can be truly agentic, being able to accomplish tasks on the user's behalf. Other agentic models have proven to be problematic in real-world usage and run into various security concerns, especially in web browsers. The latest AI release from Google also comes at a time when there are fears of an AI bubble forming in the stock market. AI companies, including Nvidia, Google, Meta and Microsoft, account for 30% of the S&P 500. Google currently has a valuation of $3.4 trillion. Even Google CEO Sundar Pichai says the trillion-dollar AI investment boom has "elements of irrationality" and that a burst would affect every AI company, in an interview with the BBC. Still, if Google is to keep its stock price moving upward, it needs to demonstrate that its AI models beat the competition.
[7]
Want to ditch ChatGPT? Gemini 3 shows early signs of winning the AI race
ChatGPT is still the most popular AI, but Gemini is catching up. Watch out, ChatGPT. Another AI is aiming to take over the top spot in the AI world. And that's Google's Gemini. The latest Gemini 3 flavor is already earning kudos for being faster, smarter, and more adept at specialized tasks, leading several tech execs to crown it as the new AI king. Released last week, Gemini 3 is being touted as not only smarter and faster than its predecessors and rival AIs but also better at coding and multimodality. With multimodality as part of its skillset, Gemini can more easily and effectively work with different types of content, such as text, images, audio, video, and computer code. Plus, the new AI model is more skilled with Ph.D.-level reasoning, helping it better solve complex problems in science, math, and other technical areas. With its superior skills, Gemini 3 Pro has already landed on the LMArena Leaderboard with the top score in virtually every category, based on anonymous voting that pitted it against models from Anthropic, Meta, xAI, DeepSeek, and others. Google's latest AI also grabbed a 91% score on GPQA Diamond, a benchmark that evaluates Ph.D.-level reasoning. Best of all, Gemini 3 is available to Google users across the Gemini website, the Gemini mobile app, Google Search, AI Studio, Vertex AI, and a new agentic development platform called Google Antigravity. The basic version of Gemini 3 is free to all users. Packed with more advanced features, the Pro flavor is accessible only to Google AI subscribers and certain programs. Those are notable credentials for an AI model that just debuted a week ago. Tech leaders are also singing its praises, including Salesforce CEO Marc Benioff, former Tesla AI director Andrej Karpathy, and Stripe CEO Patrick Collison. As spotted by Business Insider, Benioff is promising to dump ChatGPT for Gemini. On Sunday, the Salesforce CEO proclaimed on X: "I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back. The leap is insane -- reasoning, speed, images, video... everything is sharper and faster. It feels like the world just changed, again." In his X post, Karpathy wrote: "I had a positive early impression yesterday across personality, writing, vibe, coding, humor, etc., very solid daily driver potential, clearly a tier 1 LLM, congrats to the team!" Collison said in his own post that he asked Gemini 3 to make an interactive web page summarizing 10 breakthroughs in genetics over the past 15 years. He shared the result, which he said he found pretty cool. Even OpenAI CEO Sam Altman chimed in last week, saying that Gemini 3 "looks like a great model." (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Based on use, ChatGPT is still the king of the AI castle. At the top of Apple's charts for most popular free iPhone apps, ChatGPT has more than 800 million weekly active users, double the 400 million count from February 2025, according to Altman and data reporting platform Demand Sage. But Gemini has been catching up. Google's AI is the second most popular free iPhone app. Gemini sees around 650 million monthly users, up from around 450 million at the start of 2025, Demand Sage reports. More noteworthy than popularity, though, is performance.
That's where Gemini 3 is already making a name for itself. But even there, Gemini's new success is not just about raw power but better usability, according to industry analysts as reported in a Monday story by the Wall Street Journal (subscription required). All the power and AI benchmarks in the world wouldn't mean much if the average person couldn't use the AI easily, effectively, and reliably. Those are three areas where an AI often falls down on the job. But here, Gemini 3 seems to be earning kudos. The model has been able to better hold coherent conversations while reducing errors, the Journal said. That has raised expectations for AI in both the business and consumer markets. "This rollout reflects a shift in AI development where innovation focuses on practical deployment and user-centric design," the Journal said. "By closing gaps in understanding and interaction, Gemini makes conversational AI more reliable and applicable." Of course, there's still the race with ChatGPT to see which AI can win over more people. "We have to recognize that Gemini's the biggest threat to ChatGPT we've seen so far," investment guru and media personality Jim Cramer said on his CNBC show. "There's simply no two ways about it -- Gemini's existential for OpenAI," he said. Though Cramer claims that any business reliant on ChatGPT just became more precarious, he said he certainly wouldn't write off ChatGPT. That's because OpenAI may have a "revolutionary version of its own product" in the works. In the midst of Cramer's usual high-octane commentary, he makes a valid point. The battle for AI supremacy is far from over. I wouldn't be surprised if OpenAI does have an innovative new version of ChatGPT waiting in the wings. For now, though, Gemini is enjoying waves of acclaim and excitement. If Google wants to hang onto that buzz, then it also needs to be working on the next iteration of its AI to show that it aims to stay in the game.
[8]
'Holy shit': Gemini 3 is winning the AI race -- for now
When an AI model release immediately spawns memes and treatises declaring the rest of the industry cooked, you know you've got something worth dissecting. Google's Gemini 3 was released Tuesday to widespread fanfare. The company called the model a "new era of intelligence," integrating it into Google Search on day one for the first time. It's blown past OpenAI and other competitors' products on a range of benchmarks and is topping the charts on LMArena, a crowdsourced AI evaluation platform that's essentially the Billboard Hot 100 of AI model ranking. Within 24 hours of its launch, more than one million users tried Gemini 3 in Google AI Studio and the Gemini API, per Google. "From a day one adoption standpoint, [it's] the best we've seen from any of our model releases," Google DeepMind's Logan Kilpatrick, who is product lead for Google's AI Studio and the Gemini API, told The Verge. Even OpenAI CEO Sam Altman and xAI CEO Elon Musk publicly congratulated the Gemini team on a job well done. And Salesforce CEO Marc Benioff wrote that after using ChatGPT every day for three years, spending two hours on Gemini 3 changed everything: "Holy shit ... I'm not going back. The leap is insane -- reasoning, speed, images, video... everything is sharper and faster. It feels like the world just changed, again." "This is more than a leaderboard shuffle," said Wei-Lin Chiang, cofounder and CTO of LMArena. Chiang told The Verge that Gemini 3 Pro holds a "clear lead" in occupational categories including coding, math, and creative writing, and its agentic coding abilities "in many cases now surpass top coding models like Claude 4.5 and GPT-5.1." It also got the top spot on visual comprehension and was the first model to surpass a ~1500 score on the platform's text leaderboard. The new model's performance, Chiang said, "illustrates that the AI arms race is being shaped by models that can reason more abstractly, generalize more consistently, and deliver dependable results across an increasingly diverse set of real-world evaluations." Alex Conway, principal software engineer at DataRobot, told The Verge that one of Gemini 3's most notable advancements was on a specific reasoning benchmark called ARC-AGI-2. Gemini scored almost twice as high as OpenAI's GPT-5 Pro while running at one-tenth of the cost per task, he said, which is "really challenging the notion that these models are plateauing." And on the SimpleQA benchmark -- which involves simple questions and answers on a broad range of topics, and requires a lot of niche knowledge -- Gemini 3 Pro scored more than twice as high as OpenAI's GPT-5.1, Conway flagged. "Use case-wise, it'll be great for a lot more niche topics and diving deep into state-of-the-art research and scientific fields," he said. But leaderboards aren't everything. It's possible -- and in the high-pressure AI world, tempting -- to train a model for narrow benchmarks rather than general-purpose success. So to really know how well a system is doing, you have to rely on real-world testing, anecdotal experience, and complex use cases in the wild. The Verge spoke with professionals across disciplines who use AI every day for work. The consensus: Gemini 3 looks impressive, and it does a great job on a wide breadth of tasks -- but when it comes to edge cases and niche aspects of certain industries, many professionals won't be replacing their current models with it anytime soon.
The majority of people The Verge spoke with plan to continue to use Anthropic's Claude for their coding needs, despite Gemini 3's advancements in that space. Some also said that Gemini 3 isn't optimal on the user interaction front. Tim Dettmers, assistant professor at Carnegie Mellon University and a research scientist at Ai2, said that though it's a "great model," it's a bit raw when it comes to UX, meaning "it doesn't follow instructions precisely." Tulsee Doshi, Google DeepMind's senior director of product management for Gemini and Gen Media, told The Verge that the company prioritized bringing Gemini 3 to a variety of Google products in a "very real way." When asked about the instruction-following concerns, she said it's been helpful to see "where folks are hitting some of the sticking points." She also said that since the Pro model is the first release in the Gemini 3 suite, later models will help "round out that concern." Joel Hron, CTO of Thomson Reuters, said that the company has its own internal benchmarks it's developed to rank both its internal models and public ones on the areas that are most relevant to their work -- like comparing two documents up to several hundreds of pages in length, interpreting a long document, understanding legal contracts, and reasoning in the legal and tax spaces. He said that so far, Gemini 3 has performed strongly across all of them and is "a significant jump up from where Gemini 2.5 was." It also outperforms several of Anthropic's and OpenAI's models right now in some of those areas. Louis Blankemeier, cofounder and CEO of Cognita, a radiology AI startup, said that in terms of "pure numbers" Gemini 3 is "super exciting." But, he said, "we still need some time to figure out what the real-world utility of this model is." For more general domains, Blankemeier said, Gemini 3 is a star, but when he played around with it for radiology, it struggled with correctly identifying subtle rib fractures on chest X-rays, as well as uncommon or rare conditions. He calls radiology akin to self-driving cars in many ways, with a lot of edge cases -- so a newer, more powerful model may still not be as effective as an older one that's been refined and trained on custom data over time. "The real world is just so much more difficult," he said. Similarly, Matt Hoffman, head of AI at Longeye, a company providing AI tools for law enforcement investigations, sees promise in the Gemini 3 Pro-powered Nano Banana Pro image generator. Image generators allow Longeye to create convincing synthetic datasets for testing, letting it keep real, sensitive investigation data secure. But although the benchmarks are impressive, they may not map to the company's actual use cases. "I'm not confident Longeye could swap out a model we're using in production for Gemini 3 and see immediate improvements," he said. Other companies also say they're excited about Gemini -- but not necessarily using it to replace everything else. Built, a construction lending startup, currently uses a mix of foundational models from Google, Anthropic, OpenAI, and others to analyze construction draw requests -- a package of documents often sent to a construction lender, like invoices and proof of work done, requesting that funds be paid. This requires multimodal analysis of text and images, plus a large context window for the main agent delegating tasks to the others, VP of engineering Thomas Schlegel told The Verge. 
That's part of what Google promises with Gemini 3, so the company is currently exploring switching it out for 2.5. "In the past we've found Gemini to be the best at all-purpose tasks, and 3 looks to be a big step forward along those same lines," Schlegel said. "It's everything we love about Gemini on steroids." But he doesn't yet think it will replace all the other models, including Claude for coding tasks and OpenAI products for business reasoning. For Tanmai Gopal, cofounder and CEO of AI agent platform PromptQL, the stir Gemini 3 has caused is valid, but "it's definitely not the end of anything" for Google's competitors. AI models are becoming better and cheaper, and since they're on such quick release cycles, "one is always ahead of the pack for a period of time." (For instance, the day after Gemini 3 came out, OpenAI released GPT-5.1-Codex-Max, an update to a week-old model, ostensibly to challenge Gemini 3 on a few coding benchmarks.) Gopal said PromptQL is still working on internal evaluations to decide how, if at all, the team's model choices will change, but "initial results aren't necessarily showing something drastically better" than their current lineup. He said his current preference is Claude for code generation, ChatGPT for web search, and GPT-5 Pro for "deep brainstorming," but he may incorporate Gemini 3 as a default model, since it's "probably best-in-class for consumer tasks across creative, text, [and] image." And like virtually every model, Gemini 3 has had moments of what I'll dub "robotic hand syndrome" -- when an AI system does something complex with flying colors but gets gobsmacked by the simplest query, akin to the robotic hands of yesteryear having trouble gripping a soda can. Famed researcher Andrej Karpathy, who was a founding member of OpenAI and former director of AI at Tesla, wrote on X after testing Gemini 3 that he "had a positive early impression yesterday across personality, writing, vibe coding, humor, etc., very solid daily driver potential, clearly a tier 1 LLM," but he noted that the model refused to believe him when he said it was 2025 and later said it had forgotten to turn on Google Search. (He ascertained that in early testing, he may have been given a model with a stale system prompt.) In The Verge's own experience testing Gemini 3, we found it "delivers reasonably well -- with caveats." It likely won't stay on top forever, but it's an unmistakable step up for the company. "You're sort of in this leapfrog game from model to model, month to month, when a new one drops," Hron said. "But what stuck to me about Google's release is it makes substantial improvements across many dimensions of models -- so it's not like it just got better at coding or it just got better at reasoning ... It really, across the board, got a good bit better."
[9]
ChatGPT Who? Google Releases Gemini 3, Now in AI Mode
Days after OpenAI released GPT 5.1, Google is introducing its own, potentially more powerful AI model with Gemini 3, which starts rolling out to users today. The company says Gemini 3 is Google's most intelligent AI model. It hyped up Gemini 2.5 with the same language back in March, but this time, Google is confident enough to release Gemini 3 immediately to everyone. "This is the first time we are shipping Gemini in Search on day one," says CEO Sundar Pichai. Gemini 3 is rolling out via AI Mode, Google's ChatGPT-like interface on the Google search engine. One standout feature is how Gemini 3 can display "immersive visual layouts and interactive tools and simulations, all generated completely on the fly based on your query." The new model is also launching for all users in the Gemini app. And Google promises a boost for regular searches, too. For each query, the company plans to use Gemini 3 to perform "even more searches to uncover relevant web content" that may have been missed previously. In addition, the company is signaling that Gemini 3 will eventually power AI Overviews, the Google Search function that automatically summarizes the answer to your query (for better or worse) at the top of the results page. "In the coming weeks, we're also enhancing our automatic model selection in Search with Gemini 3. This means Search will intelligently route your most challenging questions in AI Mode and AI Overviews to this frontier model -- while continuing to use faster models for simpler tasks," the company wrote in a separate blog post. That said, the Gemini 3 integration with AI Overviews will initially be limited to paid subscribers of Google's AI plans. In the US, they can also use a more powerful Gemini 3 Pro model starting today via AI Mode by selecting the "Thinking" option from the model drop-down menu. According to Google, the Gemini 3 model outperforms GPT 5.1 and Anthropic's Claude Sonnet 4.5 across a wide range of AI-related benchmarks, including math, scientific reasoning, and multilingual questions and answers. "It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem," Pichai added. Ironically, though, the same benchmarks also indicate Gemini 3 struggles with "visual reasoning puzzles" and "academic reasoning." However, the Google model still beats the competition in these tests. The company also says Gemini 3 has been built to resist "sycophancy" and "prompt injection" attacks that can manipulate the AI into executing malicious instructions. To attract software developers, the company also announced "Google Antigravity," a suite of AI-powered tools for computer programming, an apparent response to rival coding programs from OpenAI and Anthropic. "Using Gemini 3's advanced reasoning, tool use, and agentic coding capabilities, Google Antigravity transforms AI assistance from a tool in a developer's toolkit into an active partner," Google's CEO said. "Now, agents can autonomously plan and execute complex, end-to-end software tasks simultaneously on your behalf while validating their own code." In addition, Google has developed an even smarter "Gemini 3 Deep Think mode." But the company wants to be careful with its release. "We're taking extra time for safety evaluations and input from safety testers before making it available to Google AI Ultra subscribers in the coming weeks," the company explained.
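The automatic model selection described above — routing the hardest questions to the frontier model while faster models handle simpler tasks — can be pictured with a minimal sketch. Google has not published its actual selection logic; the difficulty heuristic, threshold, and model names below are illustrative assumptions only.

```python
# Toy model router: heuristic, threshold, and model identifiers are illustrative
# assumptions, not Google's actual Search routing logic.
FRONTIER_MODEL = "gemini-3-pro"  # assumed name for the frontier model
FAST_MODEL = "gemini-flash"      # assumed name for a faster, cheaper model

def estimate_difficulty(query: str) -> float:
    """Crude stand-in for whatever classifier Search actually uses."""
    signals = ("why", "compare", "prove", "derive", "trade-off", "step by step")
    length_score = min(len(query.split()) / 40, 1.0)
    signal_score = sum(term in query.lower() for term in signals) / len(signals)
    return 0.5 * length_score + 0.5 * signal_score

def select_model(query: str, threshold: float = 0.2) -> str:
    """Send challenging queries to the frontier model, the rest to the fast one."""
    return FRONTIER_MODEL if estimate_difficulty(query) >= threshold else FAST_MODEL

if __name__ == "__main__":
    print(select_model("capital of France"))                                 # gemini-flash
    print(select_model("compare fixed vs variable mortgages step by step"))  # gemini-3-pro
```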
[10]
Google releases Gemini 3 with new reasoning and automation features
The model is now embedded across Google's core products, introducing new agent features and automation tools that raise strategic questions for enterprise IT leaders. Google has launched Gemini 3 and integrated the new AI model into its search engine immediately, aiming to push advanced AI features into consumer and enterprise products faster as competition in the AI market intensifies. The release brings new agentic capabilities for coding, workflow automation, and search, raising questions about how quickly businesses can adopt these tools and what impact they may have on existing IT operations. Google also introduced new agent features, including Gemini Agent and the Antigravity development platform, designed to automate multi-step tasks and support software teams.
[11]
Google just rolled out Gemini 3 to Search - here's what it can do and how to try it
The new Search experience is available to Google AI Pro and Ultra subscribers. Google just made the biggest change to its search engine since the company debuted AI Overviews in it last year. The company officially debuted Gemini 3, its latest AI model, on Tuesday morning, and has already integrated it with Search, which Google says will enable deeper contextual awareness, more sophisticated reasoning capabilities, and multimedia responses to help users unlock more useful information from the web. This marks the first time that a new AI model from Google has been fused with its search engine from the jump, which signals a growing level of confidence from the company as it races against OpenAI, Microsoft, Meta, and Amazon -- both to deploy new models and make them accessible through existing consumer-facing tools. The newly upgraded Google search is designed to simultaneously optimize for both range and specificity: it covers a wider portion of the web to search for all relevant results to a given query, and is also engineered to read between the lines of that query, as it were, to get a clear sense of the user's true intent. "Gemini 3 brings incredible reasoning power to Search because it's built to grasp unprecedented depth for your hardest questions," Elizabeth Reid, VP and Head of Search at Google, wrote in a company blog post. Google has growing competition in this regard. AI startups like OpenAI and Perplexity have launched their own AI-powered web browsers with an eye toward stealing a slice of the pie that's long been hoarded by Google with its undisputed dominance over online search. But Google has the obvious advantage: that millions of people already rely on its browser and search engine, which means that it can easily embed its new AI model into those people's daily online routines, just as it did -- like it or not -- with AI Overviews. Reid added in the blog post that, in the coming weeks, Google will also update its automatic model selection feature in Search, so that the most challenging queries automatically get funneled to Gemini 3 while older and faster models tackle easier tasks. Google Search is also getting a multimodal upgrade thanks to the newly released Gemini 3. Rather than just responding with text, web links, and images, Gemini 3 in AI Mode can automatically generate visual aids to help users gain a more thorough understanding of the information they're seeking. "When the model detects that an interactive tool will help you better understand the topic, it uses its generative capabilities to code a custom simulation or tool in real-time and adds it into your response," Reid wrote in the blog post. If you're trying to wrap your mind around the infamous double-slit experiment, for example -- which is foundational to quantum mechanics and shows that subatomic particles can act as both particles and waves -- the newly upgraded, multimodal Google search might provide you with an interactive simulation so that, rather than just reading about the experiment, you can directly engage with it. Welcome news, no doubt, to the many people out there who consider themselves visual and/or hands-on learners. Gemini 3 Pro, the first of the new Gemini 3 family of models, is available now for Google AI Pro and Ultra subscribers.
To take the new model's search capabilities for a spin, just select "Thinking" from the drop-down menu in AI Mode. The company plans to release the new model via AI Mode to all US users soon, with higher limits for Google AI Pro and Ultra subscribers.
[12]
Gemini 3 is almost as good as Google says it is
Google set the bar high for Gemini 3. It's promising a bunch of upgraded features in its shiny new AI model, from generating code that produces interactive 3D visualizations to "agentic" capabilities that complete tasks. But as we've seen in the past, what's advertised doesn't always match up to reality. So we put some of Google's claims to the test and found that Gemini 3 delivers reasonably well -- with caveats. Google announced the Gemini 3 family of models earlier this week, with the flagship Gemini 3 Pro rolling out to users first. Gemini 3 Pro is supposed to come with big upgrades to reasoning, along with the ability to provide more concise and direct responses compared to Google's previous models. Some of the biggest promised improvements are to Canvas, the built-in workspace inside the Gemini app, where you can ask the AI chatbot to generate code, as well as preview the output. When building in Canvas, Google says Gemini 3 can interpret material from different kinds of sources at the same time, like text, images, and videos. The model can handle more complex prompts as well, allowing it to generate richer, more interactive user interfaces, models, and simulations, according to Google. The company says Gemini 3 is "exceptional" at zero-shot generation, too, which means it's better at completing tasks that it hasn't been trained on. For my first test, I tried out one of the more complex requests that Google showed off in one of its demos: I asked Gemini 3 to create a 3D visualization of the difference in scale between a subatomic particle, an atom, a DNA strand, a beach ball, the Earth, the Sun, and the galaxy. Gemini 3 created an interactive visual similar to what Google demonstrated, allowing me to scroll through and compare the size of different elements, which appeared to correctly list each one from small to large, starting at the proton and maxing out at the cosmic web. (To be fair, I'd hope Gemini could figure out that a beach ball is much smaller than the Sun.) It included almost everything shown in the demo, but its image quality fell short in a couple of areas, as the 3D models of the strand of DNA and beach ball were quite dim compared to what Google showed. I saw much of the same when feeding Google's other demos into Gemini. The model spit out the correct concept, but it was always a little shoddier, whether it had lower resolution or was just a little more disorganized. Gemini 3's output didn't quite stack up to Google's demo when I tried something a little simpler, either. I asked it to re-create a model of a voxel-art eagle sitting on a tree branch, and while my results were quite similar to the demo, I couldn't help but notice that the eagle didn't have any eyes, and the trees were trunkless. Branching out from Google's example, a voxel-style panda came out alright, but standard 3D models of a penguin and turtle came out quite primitive, with little to no detail. But Gemini 3 isn't just built for prototyping and modeling; Google is testing a new "generative UI" feature for Pro subscribers that packages its responses inside a "visual" magazine-style interface, or in the form of a "dynamic" interactive webpage. I only got access to Gemini 3's visual layout, which Google showed off as a way to envision your travel plans, like a three-day trip to Rome.
When I tried out the Rome trip prompt, Gemini 3 presented me with what looked like a personalized webpage featuring an itinerary, along with options to customize it further, such as whether I'd prefer a relaxed or fast-paced vacation or if I prioritize certain dining styles. Once you submit your preferences, Gemini 3 will redesign the layout to match your selections. I found that this feature can provide interactive guides on other topics, too, like how to build a computer or set up an aquarium. Next up, I did a little experimenting with Gemini Agent, a feature Google is testing for Ultra subscribers inside the Gemini app. Like other agentic features, Gemini Agent aims to perform tasks on your behalf, such as adding reminders to your calendar and creating reservations. One example shared by Google shows Gemini Agent organizing a Gmail inbox, so I asked the tool to do the same -- and, well, it followed my orders. It found 99 unread emails from the last week and displayed them inside an interactive chart. Gemini provided options to set up reminders for what appeared to be the most important ones, such as RSVPs and a bill, while offering buttons to archive emails it identified as promotions. I asked Google Gemini to schedule a reminder to pay my bill, and the AI assistant put it inside Google Tasks with the correct due date. When I asked it to pay the bill, it navigated the billing interface and came close to asking me to enter my payment details, but (given the security concerns around agentic AI) I stopped short of letting it proceed. While you could just organize your inbox manually, I found Gemini 3's assistance somewhat helpful, as it dug up a few forgotten emails that I might've missed. You can also ask Gemini to find and unsubscribe from spammy email providers in bulk, which is nice. Between Perplexity's AI assistant, ChatGPT, and Gemini, Google's AI chatbot (predictably) offers the richest integration with Gmail. Perplexity will pull up emails listed in your inbox, but you'll need to tell it which ones to keep, archive, or delete, instead of just hitting a button like you can with Gemini. For some reason, ChatGPT refused to organize my inbox, claiming its integration with Gmail is in a "read-only" mode despite readily sending an email through the app on my behalf. But while Gemini is directly connected to Gmail, it was still far slower at sending emails in the app when compared to Perplexity. Gemini almost managed to book a restaurant reservation without intervention, only to falsely tell me that there's a "cost" associated with making the booking right before finalizing it. When I asked about the charge, Gemini 3 backpedaled and said it "likely referred to" the restaurant's 16 percent service charge. It then proceeded to ask me to confirm my reservation three times and then told me there was a financial transaction involved again. Sigh. Again, I felt that I could complete these tasks far faster myself. Despite the hiccups with completing tasks, Gemini 3 Pro's interactive visualization features were impressive, and I could see how interactive models or visual layouts could be useful in some scenarios -- though I can't see myself using them on a daily basis, and Gemini's text-based answers are usually informative enough for me. For now, I think I'll just keep using Gemini like I always do: for questions I might not immediately find by browsing the web.
[13]
Google announces Gemini 3 as battle with OpenAI intensifies
Google is debuting its latest artificial intelligence model, Gemini 3, as the search giant races to keep pace with ChatGPT creator OpenAI. The new AI model will allow users to get better answers to more complex questions, "so you get what you need with less prompting," Alphabet CEO Sundar Pichai said in one of several blog posts Google published Tuesday. Gemini 3 will be integrated into the Gemini app, Google's AI search products AI Mode and AI Overviews, as well as its enterprise products. The rollout begins Tuesday for select subscribers and will go out more broadly in the coming weeks. The announcement comes about eight months after Google introduced Gemini 2.5 and 11 months after Gemini 2.0. OpenAI, which kicked off the generative AI boom in late 2022 with the public launch of ChatGPT, introduced GPT-5 in August. "It's amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room," Pichai wrote in one of Tuesday's posts. "Starting today, we're shipping Gemini at the scale of Google." The Gemini app now has 650 million monthly active users and AI Overviews has 2 billion monthly users, the company said. OpenAI said in August that ChatGPT hit 700 million weekly users. Pichai added that the newest model is "built to grasp depth and nuance," and said Gemini 3 is also "much better at figuring out the context and intent behind your request, so you get what you need with less prompting." Google's other AI models may still be used for simpler tasks, the company said. Alphabet and its megacap rivals are spending heavily to build out the infrastructure for AI development and to rapidly create more services for consumers and businesses. In their earnings reports last month, Alphabet, Meta, Microsoft and Amazon each lifted their guidance for capital expenditures, and collectively expect that number to reach more than $380 billion this year. Google said AI responses powered by Gemini 3 will be "trading cliché and flattery for genuine insight -- telling you what you need to hear, not what you want to hear," according to a statement from Demis Hassabis, CEO of Google's AI unit DeepMind. Industry critics have said today's AI chatbots are too sycophantic. Last week, OpenAI issued two updates to GPT-5. One is "warmer, more intelligent, and better at following your instructions," the company said, and the other is "faster on simple tasks, more persistent on complex ones."
[14]
Google's new Gemini 3 model arrives in AI Mode and the Gemini app
Google has announced Gemini 3. Naturally, the company claims the new system is its most intelligent AI model yet, offering state-of-the-art reasoning, class-leading vibe coding performance and more. The good news is you can put those claims to the test today, with Google making Gemini 3 Pro available across many of its products and services. Google is highlighting a couple of benchmarks to tout Gemini 3 Pro's performance. On Humanity's Last Exam, widely considered one of the toughest tests AI labs can put their systems through, the model delivered a new top accuracy score of 37.5 percent, beating the previous leader, Grok 4, by an impressive 12.1 percentage points. Notably, it achieved its score without turning to tools like web search. On LMArena, meanwhile, Gemini 3 Pro is now on top of the site's leaderboards with a score of 1,501 points. Okay, but what about the practical benefits of Gemini 3 Pro? In the Gemini app, the new model will translate to answers that are more concise and better formatted. It also enables a new feature Google calls Gemini Agent. The tool builds on the web-surfing Chrome AI the company debuted at the end of last year. It allows users to ask Gemini to complete tasks for them. For example, say you want help managing your email inbox. In the past, Gemini would have offered some general tips. Now, it can do that work for you. To try Gemini 3 Pro inside of the Gemini app, select "Thinking" from the model picker. The new model is available to everyone, though AI Plus, Pro and Ultra subscribers can use it more often before hitting their rate limit. To make the most of Gemini Agent, you'll need to grant the tool access to your Google apps. In Search, meanwhile, Gemini 3 Pro will debut inside of AI Mode, with availability of the new model first rolling out to AI Pro and Ultra subscribers. Google will also bring the model to AI Overviews, where it will be used to answer the most difficult questions people ask of its search engine. In the coming weeks, Google plans to roll out a new routing algorithm for both AI Mode and AI Overviews that will know when to put questions through Gemini 3 Pro. In the meantime, subscribers can try the new model inside of AI Mode by selecting "Thinking" from the dropdown menu. In practice, Google says Gemini 3 Pro will result in AI Mode finding more credible and relevant content related to your questions. This is thanks to how the new model augments the fan-out technique that powers AI Mode. The tool will perform even more searches than before and with its new intelligence, Google suggests it may even uncover content previous models may have missed. At the same time, Gemini 3's better multi-modal understanding will translate to AI Mode generating more dynamic and interactive interfaces to answer your questions. For example, if you're researching mortgage loans, the tool can create a loan calculator directly inside of its response. For developers and its enterprise customers, Google is bringing Gemini 3 to all the usual places one can find its models, including inside of the Gemini API, AI Studio and Vertex AI. The company is also releasing a new agentic coding app called Antigravity. It can autonomously program while creating tasks for itself and providing progress reports. Alongside Gemini 3 Pro, Google is introducing Gemini 3 Deep Think. The enhanced reasoning mode will be available to safety testers before it rolls out to AI Ultra subscribers.
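Google doesn't document the fan-out step referenced above, but the basic idea — expanding one question into several sub-queries, running them in parallel, and merging the results — can be sketched roughly as follows. The expand_query and run_search functions here are hypothetical stand-ins, not real Google APIs.

```python
# Rough sketch of query fan-out as described above; run_search() and
# expand_query() stand in for whatever Google actually uses internally.
import asyncio

async def run_search(query: str) -> list[str]:
    """Hypothetical search backend: return result snippets for one query."""
    await asyncio.sleep(0)  # placeholder for a real network call
    return [f"result for: {query}"]

def expand_query(question: str) -> list[str]:
    """Stand-in for the model step that rewrites one question into sub-queries."""
    return [question, f"{question} explained", f"{question} examples"]

async def fan_out(question: str) -> list[str]:
    sub_queries = expand_query(question)
    # Issue all sub-queries concurrently, then merge and de-duplicate results.
    batches = await asyncio.gather(*(run_search(q) for q in sub_queries))
    merged: list[str] = []
    for batch in batches:
        for snippet in batch:
            if snippet not in merged:
                merged.append(snippet)
    return merged

if __name__ == "__main__":
    print(asyncio.run(fan_out("how do fixed-rate mortgages work")))
```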
[15]
Marc Benioff Joins the Chorus, Says Google Gemini Is Eating ChatGPT's Lunch
Despite its excessive spending on data centers with no clear path to revenue generation in front of it, it seemed that if OpenAI had just one thing it could count on, it was audience capture. ChatGPT seemed like it would get the brand verbification treatment, being the term people used to reference AI. Now it seems like that might be slipping away. Since the release of Google's Gemini 3 model, it's like all anyone on the AI-obsessed corners of the web can talk about is how much better it is than ChatGPT. Marc Benioff, the CEO of Salesforce and longtime ChatGPT fanboy, is perhaps the loudest convert out there. On X, the exec said, "Holy shit. I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back." He called the improvement of the model over past versions "insane," claiming that "everything is sharper and faster." He's not alone in that assessment. Andrej Karpathy, an OpenAI co-founder who has since left the company, called Gemini 3 "clearly a tier 1 LLM" with "very solid daily driver potential." Stripe CEO Patrick Collison went out of his way to praise Google's latest release, too, which is noteworthy given Stripe's partnership with OpenAI to build AI-driven transactions. Apparently, what he saw with Gemini was too hard not to comment on. The feedback from the C-suites around the tech world follows weeks of buzz over on AI Twitter that Gemini was going to be a game-changer. It certainly got presented as such right out of the gate, as Google made a point to highlight how its latest model topped just about every benchmarking test that was thrown at it (though your mileage may vary on just how meaningful any of those are). But even the folks behind the benchmark measures appear to be impressed. According to The Verge, the cofounder and CTO of AI benchmarking firm LMArena, Wei-Lin Chiang, said that the release of Gemini 3 represents "more than a leaderboard shuffle" and "illustrates that the AI arms race is being shaped by models that can reason more abstractly, generalize more consistently, and deliver dependable results across an increasingly diverse set of real-world evaluations." Google's resurgence in the AI space could not have come at a worse time for OpenAI, which currently cannot shake questions from skeptics who are unclear on how the company is ever going to make good on its multi-billion-dollar financial commitments. The company has been viewed as a linchpin of the AI industry, and that industry has increasingly received scrutiny for what seems to be some circular investments that may be artificially propping up the entire economy. Now it seems that even its image as the ultimate innovator in that space is in question, and it has a new problem: the fact that Google can definitely outspend it without worrying nearly as much about profitability problems.
[16]
Gemini 3 is here: Google's most advanced model promises better reasoning, coding, and more
Gemini 3 is Google's most intelligent AI model to date, promising improvements in reasoning, coding, and multimodal capabilities. Users will be able to ask even more difficult and puzzling questions to Gemini (including logic puzzles and math problems), and find that it aces them with ease. The AI model will also be able to execute even more complex coding tasks with this update. Furthermore, it also promises to excel in multimodal analysis, enabling users to combine media, text, and other formats within the same prompt for a seamless, all-around experience. All of this comes with a better grasp of the context and intent behind your prompt.
[17]
Google unveils Gemini 3 AI model, Antigravity agentic development tool
Gemini 3 excels at coding, agentic workflows, and complex zero-shot tasks, while Antigravity shifts AI-assisted coding from agents embedded within tools to an AI agent as the primary interface, Google said. Google has introduced the Gemini 3 AI model, an update of Gemini with improved visual reasoning, and Google Antigravity, an agentic development platform for AI-assisted software development. Both were announced on November 18. Gemini 3 is positioned as offering reasoning capabilities with robust function calling and instruction adherence to build sophisticated agents. Agentic capabilities in Gemini 3 Pro are integrated into agent experiences in tools such as Google AI Studio, Gemini CLI, Android Studio, and third-party tools. Reasoning and multimodal generation enable developers to go from concept to a working app, making Gemini 3 Pro suitable for developers at any experience level, Google said. Gemini 3 surpasses Gemini 2.5 Pro at coding, mastering both agentic workflows and complex zero-shot tasks, according to the company. Gemini 3 is available in preview at $2 per million input tokens and $12 per million output tokens for prompts of 200K tokens or less through the Gemini API in Google AI Studio and Vertex AI for enterprises. Google Antigravity, meanwhile, shifts from agents embedded within tools to an AI agent as the primary interface. It manages surfaces such as the browser, editor, and the terminal to complete development tasks. While the core is a familiar AI-powered IDE experience, Antigravity is evolving the IDE toward an agent-first future with browser control capabilities, asynchronous interaction patterns, and an agent-first product form factor, according to Google. Together, these enable agents to autonomously plan and execute complex, end-to-end software tasks. Available in public preview, Antigravity is compatible with Windows, Linux, and macOS.
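For a sense of what calling the preview model might look like in practice, here is a minimal sketch using the google-genai Python SDK; the model identifier and environment-variable name are assumptions not confirmed by the article, and the preview pricing above is quoted per million tokens.

```python
# Minimal sketch of a Gemini API call; the model ID and env var are assumptions.
# Requires `pip install google-genai` and an API key from Google AI Studio.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Prompts of 200K tokens or less fall under the preview rates quoted above
# ($2 per million input tokens, $12 per million output tokens, per Google).
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview identifier
    contents="Outline an agentic workflow for refactoring a legacy module.",
)
print(response.text)
```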
[18]
Google's latest AI model invades Search on day one
For the past week now, the tech (and gambling) sphere has been buzzing with anticipation about Google's latest Gemini model. The speculation surrounding Gemini 3, however, finally ends now. The tech giant, in a new Keyword blog post, just made its most intelligent large language model official, paired with 'Generative Interfaces' and an all-new 'Gemini Agent.' Gemini 3 is official There's a lot that's new with Gemini 3, but more importantly, this marks the first time Google has brought its new flagship AI model straight to Google Search (starting with AI Mode) on day one. According to the tech giant, the new model will set a new bar for AI model performance. "You'll notice the responses are more helpful, better formatted and more concise," wrote Google, adding that Gemini 3 is the best model in the world for multimodal tasks. This means that for tasks like analyzing photos, reasoning, document analysis, transcribing lecture notes, and more, you'll notice better performance from Gemini 3 than its predecessors (and potentially even competitors). On paper, Gemini 3 Pro boasts a score of 1501 on LMArena, ranking higher than Gemini 2.5 Pro's 1451 score. Google AI Pro and Ultra subscribers in the US can start experimenting with Gemini 3 Pro starting today. To do so, head to Google Search > AI Mode > select 'Thinking' from the model drop-down. The model will expand to everyone in the US "soon," with AI Pro and Ultra plan holders retaining higher usage limits. Generative interfaces end Gemini's static UI Think of generative interfaces as dynamic, prompt-based UIs that change depending on your specific requests. The new feature is powered by two experiments, namely visual layout and dynamic view. The former kicks in when manually selected. Instead of answering your queries in a plain text-based format, visual layout triggers an immersive, magazine-style view, complete with photos and modules. For reference, a prompt like "plan a 3-day trip to Rome next summer" will produce a visual itinerary. Dynamic view, on the other hand, changes the entire Gemini user interface. Leveraging Gemini 3's agentic coding capabilities, the feature essentially designs and codes a custom UI in real time, suited to your prompt. For example, prompting something like "explain the Van Gogh Gallery with life context for each piece" will generate a bespoke interactive page. Dynamic view and visual layout are rolling out now. Gemini Agent arrives Likely the most ambitious of the bunch, Gemini Agent, as Google describes it, is "an experimental feature that handles multi-step tasks directly inside Gemini." The agent, which can connect to your Google apps like Calendar, reminders, and more, can do a lot. For example, you can simply ask it to "organize my inbox," and it will go through your to-dos and even draft replies to emails for your approval. Alternatively, you can give the agent complex multi-command tasks to fulfill. Think something like "Research and help me book a mid-size SUV for my trip next week under $80/day using details from my email." The agent would locate flight information from Gmail, compare rentals within budget, and prepare the booking for you. Powered by Gemini 3, the agent, which needs to be manually selected from the Gemini app's 'tools' menu, can take action across other Gemini tools like Deep Research, Canvas, connected Workspace apps, live browsing, and more.
Gemini Agent is available to try out starting today, but only for US-based Google AI Ultra subscribers on the web. The tech giant did not hint at the feature expanding to more users anytime soon.
[19]
Google launches Gemini 3 with major boost in AI reasoning power
Google just fired the next shot in the AI arms race, and it's a big one. The company has unveiled Gemini 3, calling it its most powerful reasoning and multimodal model yet, and positioning it as the centerpiece of a new era of agentic, deeply interactive AI. The new release marks a leap in performance across nearly every major benchmark, from mathematics to multimodal analysis. Google says Gemini 3 Pro, rolling out today in preview, is now baked across a suite of its consumer and developer products.
[20]
Google's Gemini 3 is finally here and it's smarter, faster, and free to access
The model is faster, smarter, and better at coding and multimodality. With its Gemini AI models, Google has integrated its AI assistant across several of its most popular platforms, including Search, Google Workspace, and Android devices. Now, the company is releasing its latest and greatest model to upgrade user experiences throughout its ecosystem. On Tuesday, Google finally launched Gemini 3, which the company claims is the "best model in the world for multimodal understanding and our most powerful agentic and vibe coding model yet." The claim is supported by benchmark data, crowd-sourced Arena results, and more advanced use cases that the chatbot has not previously been able to tackle. The best part? Everyone can try it across Google's suite of tools, including Search, the Gemini App, AI Studio, and Vertex AI, as well as a brand-new agentic development platform, Google Antigravity. To learn about what's new and what you can expect, keep reading below. Google is kicking off the Gemini 3 era with the launch of Gemini 3 Pro. Gemini 3 Pro outperforms its predecessor, Gemini 2.5 Pro, across all major AI benchmarks -- but before I get into the data, here's what it practically means for you. Gemini 3 Pro has been designed to provide users with better quality answers that have a new "level of depth and nuance," according to Google. This includes smarter, more concise, and more direct answers that focus on helpfulness rather than flattery. Of course, as the benchmarks support, the model is designed to be smarter and reason more effectively, demonstrating improved performance in Ph.D.-level reasoning that enables it to better tackle complex problems, such as those in science and mathematics. Google said the model also pushes the frontiers of multimodal reasoning by being able to synthesize information about a topic across multiple modalities, including text, images, video, audio, and code. For everyday help, this means you can input information in more ways, have the model understand it, and receive responses in whichever medium best fits your request. Lastly, for developers, the model reflects a major win, as it's the best vibe coding and agentic coding model Google has released, climbing to the top of WebDev Arena, a leaderboard compiled from user votes comparing anonymous models' ability to build a site. Developers can leverage coding capabilities through Google AI Studio, Vertex AI, Gemini CLI, and third-party platforms such as Cursor, GitHub, JetBrains, and others. The company also launched Google Antigravity, a brand-new agent development platform that functions as an active coding partner, allowing agents to plan and execute complex, end-to-end software tasks and even validate their own code. The agentic capabilities don't end there, with Google sharing that it will be able to take action on your behalf with more complex workflows, such as reorganizing your Gmail inbox. However, these more advanced capabilities are made accessible through the Gemini Agent in the Gemini App for Google AI Ultra subscribers.
Gemini 3 Pro skyrocketed to the top of the LMArena leaderboard with a score of 1,501 points, a noteworthy feat as that score is compiled through anonymous voting against top models from Anthropic, Meta, xAI, DeepSeek, and more. Reflecting its advanced reasoning, the model scored 91.9% on GPQA Diamond, a benchmark used to evaluate Ph.D.-level reasoning; a top score on Humanity's Last Exam (37.5% without the use of tools), a multimodal exam that covers a variety of fields; and 81% on MMMU-Pro and 87.6% on Video-MMMU, benchmarks used to test multimodal reasoning. The advanced reasoning capabilities also translate to higher scores on math benchmarks, such as MathArena Apex, where the model scored a state-of-the-art 23.4%, according to Google. The company also launched Gemini 3 Deep Think, which has deeper intelligence, reasoning, and understanding capabilities. The Deep Think version outperformed Gemini 3 Pro on Humanity's Last Exam (41% without using tools) and GPQA Diamond (93.8%). Everyone can access Gemini 3 in the Gemini app. To access Gemini 3 in AI Mode in Search, users must have a Google AI Pro or Ultra subscription. Developers can access it via the Gemini API in AI Studio, Antigravity, and Gemini CLI. Lastly, enterprises can access it in Vertex AI and Gemini Enterprise.
[21]
Google is launching Gemini 3, its 'most intelligent' AI model yet
Google is beginning to launch Gemini 3 today, a new series of models the company says are its "most intelligent" and "factually accurate" AI systems yet. They're also a chance for Google to leap ahead of OpenAI following the rocky launch of GPT-5, potentially putting the company at the forefront of consumer-focused AI models. For the first time, Google is giving everyone access to its new flagship AI model -- Gemini 3 Pro -- in the Gemini app on day one. It's also rolling out Gemini 3 Pro to subscribers inside Search. Tulsee Doshi, Google DeepMind's senior director and head of product, says the new model will bring the company closer to making information "universally accessible and useful" as its search engine continues to evolve. "I think the one really big step in that direction is to step out of the paradigm of just text responses and to give you a much richer, more complete view of what you can actually see." Gemini 3 Pro is "natively multimodal," meaning it can process text, images, and audio all at once, rather than handling them separately. As an example, Google says Gemini 3 Pro could be used to translate photos of recipes and then transform them into a cookbook, or it could create interactive flashcards based on a series of video lectures. You'll spot some of these improvements across Google's suite of products, including the Gemini app, where you can build more "full-featured" programs inside the built-in workspace, Canvas. The upgraded AI model will also enable "generative interfaces," a tool Google is testing in Gemini Labs that allows Gemini 3 Pro to create a visual, magazine-style format with pictures you can browse through, or a dynamic layout with a custom user interface tailored to your prompt. Gemini 3 Pro in AI Mode -- the AI-powered Google Search feature -- will similarly present you with visual elements, like images, tables, grids, and simulations based on your query. It's also capable of performing more searches using an upgraded version of Google's "query fan-out technique," which now not only breaks down questions into bits it can search for on your behalf, but is better at understanding intent to help "find new content that it may have previously missed," according to Google's announcement. Google is also not so subtly jabbing at OpenAI, describing Gemini 3 Pro as less prone to the type of empty flattery espoused by ChatGPT. Doshi says you'll see "noticeable" changes to Gemini 3 Pro's responses, which Google describes as "smart, concise and direct, trading cliche and flattery for genuine insight -- telling you what you need to hear, not just what you want to hear." The company says it also shows "reduced sycophancy," an issue OpenAI had to address with ChatGPT earlier this year. Along with these improvements, Gemini 3 Pro comes with better reasoning and agentic capabilities, allowing it to complete more complex tasks and "reliably plan ahead over longer horizons," according to Google. The AI model is powering an experimental Gemini Agent feature that can perform tasks on your behalf inside the Gemini app, such as reviewing and organizing emails, or researching and booking travel. Gemini 3 Pro now sits at the top of LMArena's leaderboard, a popular platform used for benchmarking AI models. A Deep Think mode enhances the model's reasoning capabilities even further, though it's currently only available to safety testers.
Gemini 3 Pro is available inside the Gemini app for everyone starting today, while Google AI Pro and Ultra subscribers in the US can try out Gemini Agent in the Gemini app, along with Gemini 3 Pro inside AI Mode by selecting "Thinking" from the model dropdown.
[22]
Google enters 'new era' of AI with its advanced Gemini 3 model
Google is launching Gemini 3, its most intelligent AI model yet. From the outset, users will have access to the flagship Gemini 3 Pro model, which is multimodal from the ground up and can process text, image, audio, and video in the same flow. This model is said to top several AI benchmark tests in logic, math, and fact checking. It's also said to provide shorter, more direct, and less biased responses. One new feature is that Gemini 3 can generate interfaces in the form of presentations or web page layouts based on the user's instructions. There is also an improved search feature with extended "query fan-out" that will allow the model to better understand the intent behind a question. In both the Gemini app and Google Search's AI Mode, answers are now also displayed in a way that combines visual elements such as images, tables, grids, and even interactive simulations. On top of that, Gemini 3 Pro has gained sharper reasoning and planning abilities, allowing it to handle more long-term and complex tasks. It powers Google's new experimental tool Gemini Agent, which can, for example, organize emails or book travel for users. Gemini 3 Pro is now available in the Gemini app in preview form. In the US, Google AI Pro and Ultra subscribers can also test the Agent feature as well as the new Advanced AI mode.
[23]
Gemini app rolling out Gemini 3 Pro as 'Gemini Agent' comes to AI Ultra
Google today is introducing new features for the Gemini app, led by Gemini Agent, made possible by Gemini 3 Pro. For starters, responses in the Gemini app are "more helpful, better formatted, and more concise." In Canvas, apps are "more full-featured," with Google calling Gemini 3 its "best vibe coding model ever." Gemini 3 Pro is available for all users starting today. From the model picker, select "Thinking" -- a change also shared by AI Mode. Google AI Plus, Pro, and Ultra subscribers will get higher limits. The "Labs" concept is coming to the Gemini app. Visual layout is the first experiment; it creates an "immersive, magazine-style view complete with photos and modules" to answer your prompt. Gemini will show sliders, checkboxes, and other UI to further customize the result. For example, when trip planning, you might get a slider to set the pace, while filters let you select the activity type. Dynamic view sees Gemini 3 design and code a "fully customized interactive response for each prompt." At launch, you might only see one of these experiments as Google gathers feedback. Gemini Agent brings what Google has learned with Project Mariner to the Gemini app. This experimental feature "handles multi-step tasks directly inside Gemini." This is made possible by Gemini 3's advanced reasoning, live web browsing, and tool use, including Canvas, Deep Research, Gmail, and Google Calendar. Gemini will have you confirm before sending emails, making purchases, and taking other critical actions, with users able to take over at any time. One example prompt is "organize my inbox," with the user selecting "Agent" from the Tools menu. Gemini Agent then groups together related emails in a table that lets you quickly archive emails and mark them as read by tapping the checkmark. It can also create new Google Tasks reminders. Meanwhile, Gemini can now generate an email reply and send it directly from the Gemini app. The inline email interface lets you add recipients and change the body as needed. Another example is: "Book a mid-size SUV for my trip next week under $80/day using details from my email." Gemini will "locate your flight information, research rentals within budget and prepare the booking." This is available today for Google AI Ultra subscribers ($249.99 per month) in the US.
[24]
Gemini 3 is here -- 9 prompts to use that make the most of its new features
Google has just launched Gemini 3, the latest update to its AI system. As expected, this was a huge leap for the company, introducing everything from an AI agent to a variety of clever new tricks for the Gemini chatbot. However, like all major upgrades to an AI system, the way that you prompt it has once again changed. Sure, the old classics will continue to work. In fact, Gemini is now better than ever at understanding the context of your prompts, no matter what you ask. But with an update as big as this, it opens the opportunity to dive deep, using more detailed prompts to pull out the absolute best that it has to offer. Not sure where to start? We've got some prompts below that make the most of all of the new features included in Gemini 3. Gemini 3 is a major upgrade for Google. While it is being introduced across Google Search, NotebookLM and more, it is the changes found in the Gemini app that are most impactful. There are improvements in its reasoning capabilities, how it deals with context and long sections of information, and even its emotional awareness and creativity. Google also announced a selection of new features within the app. This includes a new agentic mode, letting Gemini take control of your laptop to plan flights, book restaurants and more, as well as in-built coding tools and the ability to use the web via Gemini in much smarter ways. Prompt: "[Upload a screenshot of a complex chart or data table from a financial report]. Identify the single most significant trend shown in the data and write a short, one-paragraph summary explaining why that trend matters." Gemini's improvements in reasoning capabilities make it great at analyzing images. Try uploading a complicated document and giving it a precise and challenging task to go along with it. Prompt: "Explain each of the planets in our solar system and what is important about each of them." Using Dynamic view (one of Gemini's new features for making unique interactions through code) is a great way to create an interactive experience for learning new subjects. Google advertised this with a working website about Van Gogh's life. Try asking Gemini to explain a complicated theory using this setting to make a subject more interesting. Prompt: "Analyze this floor plan image. Identify the best layout for a living room including a sofa, TV unit, arm chair and a standing lamp. Suggest ways to make the most of the natural light and any concerns this layout might have." Gemini's contextual understanding can make it a great tool for understanding your surroundings. Want to redecorate your living room? Gemini can help you with that. Provide a floor plan and some ideas and see what you get back. Prompt: "Research and help me find a 43-inch TV that is under $300. Then find the best deal on one right now." Gemini has followed in the footsteps of ChatGPT and Perplexity, launching its own Agent tool. This can be found at the bottom of Gemini's 'tools' section on the Gemini chat bar. Select this and give it a task to work through. I've used agents before to find cheap TV deals, book restaurants or even order a pizza to my house. Prompt: "Rewrite the following sentence in two distinct styles: 1) a highly bureaucratic, formal memo, and 2) a casual, conversational text message: 'The quarterly financial review documents must be submitted to the audit committee by 5 PM on Friday.'" AI chatbots are becoming much better at tone and emotional intelligence.
Because of this, you can get some strong advice for how to shape your messages to better fit certain tones. If you don't know whether you want to sound friendly or more serious in an email, see what both might look like and then decide. Prompt: "Describe an ideal outfit, including colors, fabrics, and specific items (e.g., 'wool blazer,' 'linen trousers'), for a person attending an outdoor, professional networking event in early autumn in New York. I have attached images for inspiration." There's a lot going on in this prompt, but it shows an important part of Gemini's new brains. It can take on a lot of information. Here, it needs to use context to suggest an outfit, including materials and the location, based on the images it has been given. Just half a year ago or so, these kinds of prompts could cause AI chatbots to fall apart, getting confused about what they were being asked. However, they have come a long way since then. Prompt: "Help me plan a 7-day road trip around Western Europe with stops in Paris, Amsterdam and Brussels." One of Gemini's big new updates is known as generative interfaces. These are coming in two forms at first: visual layout and dynamic view. When you apply the visual layout, Gemini will create an interactive, magazine-like layout, complete with photos and modules for you to interact with. This makes it a great option for trip-planning, providing a full layout of where to stay, eat and visit on a set trip. Prompt: "Write a short story of 250 words. It should be from the perspective of two different people, both from 1960s New York. Use context from this period to generate a compelling story between the two." Creative writing through AI is not what it used to be. We laughed at what it churned out years ago, but now it is hard to tell the difference. Try giving Gemini some more complicated stories to attempt. For example, in this one it needs to use context from a certain period and use emotive language to power the story. Prompt: "Summarize this URL and find other pages that both support and undermine its statements. Once you have all of this information, create a report for me." A simple but incredibly useful prompt. Use Gemini to summarize URLs, documents and research papers. Then, if you want to know more, ask it to track down other bits of information. This can create a full report to better understand a subject.
[25]
Google Launches More Intelligent Gemini 3 Model
Google today introduced Gemini 3 Pro, its newest and most intelligent AI model. Google says that Gemini offers state-of-the-art reasoning and is able to grasp depth and nuance. It is also better at understanding the context and intent behind a request for more relevant answers. According to Google, Gemini 3 Pro is the best model in the world for multimodal understanding, outperforming Gemini 2.5 Pro on every major AI benchmark. Responses have been designed to be concise and direct, with less flattery. Google claims that it serves as a "true thought partner." Gemini 3 Pro is rolling out across Google platforms. It's been incorporated into AI Mode in Search for Pro and Ultra subscribers, the Gemini app (select Thinking from the model selector), AI Studio, Vertex AI, and Google Antigravity, a new agentic development platform. AI Mode in Search will use Gemini 3 to provide new generative UI experiences like immersive visual layouts and interactive tools generated on the fly. Google AI Ultra subscribers can also use Gemini 3 with Gemini Agent as of today, with Gemini 3 able to execute multi-step workflows from start to finish. Gemini 3 Deep Think is even more intelligent, and Google says that it can solve more complex problems than Gemini 3 Pro. Gemini 3 Deep Think mode will be available to Google AI Ultra subscribers in the coming weeks. As part of the Gemini 3 launch, Google redesigned the Gemini app to give it a more modern look. Google says that it's easier to start chats and find images, videos, and reports that you've created in a dedicated My Stuff folder. The shopping experience has been overhauled, incorporating product listings, comparison tables, and prices from Google's Shopping Graph. There are new interfaces, including a visual layout that uses photos and modules, and a dynamic view that uses agentic coding capabilities to create a custom user interface in real time, suited to a query.
[26]
Google Launches Gemini 3 Pro to Usher in a 'New Era of Intelligence'
Google announced on Tuesday the release of Gemini 3, the latest version of its flagship large language model, which it calls its smartest model yet, with a whole lot of benchmarks to back it up. The company said that Gemini 3 Pro will be available to users in preview across several Google products starting today, including being deployed in Search, and the "enhanced reasoning model" Gemini 3 Deep Think will start rolling out to Google AI Ultra subscribers after it completes safety testing. If you follow any AI-obsessed types on social media, you know Gemini 3 has been wildly anticipated by that bunch, who kept whispering about how the model would be a game-changer. Time will tell if it actually achieves that, but Google has a lot of metrics to show how the model is better on paper than previous iterations. The company bragged that it now tops the leaderboard of LMArena, a benchmarking tool used to compare LLMs, dethroning Grok 4.1 Thinking. Google also claims that Gemini 3 Pro demonstrates "PhD-level reasoning" on Humanity's Last Exam and GPQA Diamond, and set a new record for mathematics performance on MathArena Apex. The scores, of course, climb even higher with Gemini 3 Deep Think. Does it matter that AI benchmarks are considered unreliable and misleading? Not if you're on the hype train, it doesn't. So Gemini 3 is smarter, which tends to be the primary upgrade that new LLM models bring. But Google is introducing some new capabilities with this model, too. The company announced Google Antigravity, an integrated development environment (IDE) for coding that has a built-in AI agent for maximizing your vibe coding. Basically, it's a way to more easily hand off coding tasks to AI, which can work across the editor, terminal, and browser based on the user's commands. While Gemini 3 Pro will be the default, the company said Antigravity also supports Claude Sonnet 4.5 and GPT-OSS agents. Antigravity will be available for testing starting today on Windows, Mac, and Linux. In addition to the newly launched Antigravity, developers will be able to access Gemini 3 Pro in Google's Vertex AI and AI Studio. The company also claims Gemini 3's coding benchmarks are world-beaters, too -- topping the WebDev Arena leaderboard and setting a new high on Terminal-Bench 2.0. Again, your mileage may vary as to whether those mean anything to you or if they're just numbers on a test. The big thing about Gemini 3 is that you're just not going to be able to avoid it. For the first time, the company is immediately integrating the model into Search. So any time you get an answer from AI Mode, it'll have run through Gemini 3. The Gemini app is where the model will primarily live, of course, and users will find what Google calls a "generative interface," which offers two output modes called visual layout and dynamic view. The visual layout is a more traditional experience, pulling in images to the output based on the user's prompt. The dynamic view sees Gemini quickly whip up a website-style interface with functioning buttons and lets you interact with different pages of information. Google's Gemini 3 is the latest model to join the game of one-upmanship from the biggest players in the AI space. Earlier this month, OpenAI dropped its GPT-5.1 model, and on Monday, Elon Musk's xAI released Grok 4.1. It took all of 24 hours before Google grabbed the spotlight. We'll see how long the company can hold onto it before it's usurped by another model.
[27]
I used Google's new Gemini 3 AI to make Android apps and fine-tune my workouts
Google's new Gemini 3 AI model brings big improvements to reasoning and coding. We put it to the test. As we approach the end of the year, the companies at the forefront of AI are rushing to get their latest and greatest models into the hands of consumers, developers, and businesses. Last week, OpenAI unveiled GPT-5.1, a smarter model with more personality. Not to be outdone by its rival, Google has unveiled Gemini 3, the latest version of its AI model that the company says has improved reasoning, coding, and multimodal capabilities.
[28]
'The leap is insane': Salesforce CEO swaps ChatGPT for Gemini 3 and says he's 'not going back'
Benioff praised Gemini 3's speed, reasoning, and multimodal capabilities as a major leap forward. Salesforce CEO Marc Benioff has fallen head over heels for Gemini 3 and has publicly dumped ChatGPT in the process. His announcement left much of the AI world reeling. After just two hours playing with Gemini 3, a top enterprise software leader disavowed the most popular AI chatbot in favor of its rapidly ascending rival. The post wasn't subtle. It wasn't hedged with polite comparisons or neutral optimism. Not only has one of the most visible enterprise figures in Silicon Valley crossed from OpenAI to Google, but he's also made a point of telling the world. Gemini 3 has attracted plenty of other accolades for its blending of Google DeepMind's latest advances into a single unified architecture. Gemini 3 supports text, images, code, audio, and video in one interface. In many ways, Gemini 3 is well positioned to take ChatGPT's crown as the go-to AI chatbot. Though ChatGPT's ubiquity made it feel inevitable, Benioff's post suggests that inevitability has an expiration date. In a matter of hours, one of its most prominent evangelists defected. That says more about the pace of AI evolution than any leaderboard or benchmark can. Benioff isn't just some casual early adopter. Salesforce has deeply integrated AI into its products and business strategy. It was among the first enterprise giants to partner with OpenAI for productivity tools embedded in customer relationship software. When Benioff makes a personal leap like this, it carries the weight of corporate alignment. And if Gemini 3 is now the model he's defaulting to, the software stacks around him may follow suit. Gemini 3's strengths seem to match what high-frequency users like Benioff look for in terms of speed, reasoning, and flexibility. Google has been clear that Gemini 3 was built to function as a flexible engine for both consumers and developers, able to power everything from helpdesk bots to video editing suggestions. That breadth may be what tipped the scales for Benioff, whose day likely involves toggling between datasets, dashboards, and vision decks at Silicon Valley warp speed. It's a good reminder that the AI landscape is getting more competitive. There's a growing chorus of contenders scrambling to make your AI assistant. For people who casually check in with ChatGPT every so often, nothing has to change overnight. But Benioff's public conversion offers a peek into how quickly even entrenched preferences can give way to the promise of sharper tools.
[29]
The balance of AI power tilts toward Google
Why it matters: Google was caught on the back foot when OpenAI released ChatGPT three years ago. With the release of, and rave reviews for, Gemini 3 Pro, the script has flipped. The big picture: Google's new Gemini 3 model is forcing a reckoning at OpenAI. * CEO Sam Altman told staffers to brace for "rough vibes" and "temporary economic headwinds" as the company works to catch up, per The Information. * Tech history is full of toppled incumbents -- Betamax, AltaVista, MySpace, Friendster -- but the AI race moves at a far faster clip. Today's leader could be tomorrow's laggard. * Even before Gemini 3, OpenAI was already confronting declining engagement, Sources.news reported, as content restrictions designed for user safety squeezed consumption. State of play: On Nov. 18, Google released Gemini 3 Pro, the latest version of its AI model that will power the company's core search engine and the Gemini app. * Analysts, users, and industry insiders say Gemini 3's superior benchmarks, integration into Google's ecosystem, and cost efficiencies are pressuring OpenAI, especially after GPT-5's underwhelming August release. * After spending two hours using Gemini 3, Salesforce CEO Marc Benioff posted on X: "I'm not going back. The leap is insane -- reasoning, speed, images, video... everything is sharper and faster. It feels like the world just changed, again." Reality check: Not everyone is as enamored with Gemini as Benioff. * "Google is unmatched at the data [it] can train on," Shanea Leven, ex-Googler and current CEO of Empromptu.ai, told Axios in an email. * This means the model is trained on a wider array of specialized topics. But when Gemini doesn't know about a topic, Leven says she finds it much more willing than ChatGPT-5 to hallucinate an answer. Zoom out: Generative AI arguably began with Google's 2017 Transformer paper. Much of the technology underlying OpenAI and Anthropic's models traces back to Google, and many current and former researchers at both companies started their careers there. * Google has nearly every structural advantage: vast revenue and cloud scale, plus the resources to distribute new AI features to billions of users overnight. * The company is also one of Nvidia's few real competitors in terms of creating its own chips. * The biggest surprise about Google's rapid gains is how long they took. Yes, but: OpenAI retains strong brand loyalty from a user base of around 800 million weekly active users. * OpenAI has been adding memory features to ChatGPT that allow it to give users answers customized to their preferences and prompt history. It's possible -- but not easy -- to export that history from ChatGPT and import it into Gemini. What we're watching: Gemini 3 now leads many benchmark tests and could extend that lead when Google's enhanced reasoning mode Gemini 3 Deep Think becomes widely available.
[30]
Gemini 3 launches: Everything new, how to try them
Google has officially launched Gemini 3, its newest and most advanced AI model -- and the company is calling it its "most intelligent" system yet. The model rolled out across Google products on day one, marking the first time a new Gemini release has gone live in Search at launch. Gemini 3 represents what Google execs describe as the next major step in their push toward more capable, more context-aware AI. In a note sharing the launch, the company stated that the model incorporates deeper reasoning, stronger multimodal understanding, and enhanced awareness of user intent compared to earlier versions. What's new in Gemini 3? The headline upgrade is reasoning. Google says Gemini 3 is built to understand nuance, break down complex problems, and "grasp depth and context" in ways its predecessors couldn't. In benchmarking tests, Gemini 3 Pro took the top spot on the LMArena leaderboard with a 1501 Elo score, outperforming both Grok and Gemini 2.5 Pro. The model also demonstrates significant improvements in academic-style reasoning. On Humanity's Last Exam, Gemini 3 Pro hit 37.5 percent without any external tools, and on GPQA Diamond, it reached 91.9 percent. It also posted state-of-the-art scores across math and multimodal tasks, including video understanding and factual accuracy. A new enhanced mode called Gemini 3 Deep Think pushes those capabilities even further. Deep Think improves scores on nearly every benchmark, including a jump to 41 percent on Humanity's Last Exam and 45.1 percent on the ARC-AGI-2 challenge. It's designed for the most challenging, long-horizon reasoning tasks. Where can you use Gemini 3? Gemini 3 is already live across the Google ecosystem. That includes: * AI Mode in Search, with more dynamic, generative interfaces and complex reasoning support. (Currently available for Google AI Pro and Ultra subscribers.) * The Gemini app is getting a redesign, including the My Stuff folder and improved shopping experiences. * Gemini for developers, available now in AI Studio, Vertex AI, Gemini CLI, and Google's new agentic development platform Antigravity. * Gemini Agent, an upgraded system that can complete multi-step tasks like inbox organization, appointment management, and more. How to try Gemini 3 right now If you want hands-on time with Gemini 3 immediately, there are a few pathways: * Use AI Mode in Search (Google AI Pro or Ultra subscription required). * Open the Gemini app, where Gemini 3 is the default model starting today for most users globally. * Developers can experiment with AI Studio or Vertex AI, gaining full access to Gemini 3 Pro. * Enterprise customers can deploy it through Vertex AI and Gemini Enterprise. Deep Think, the highest-end version, will be rolled out in the coming weeks following additional safety evaluations.
[31]
Google unveils Gemini 3 claiming the lead in math, science, multimodal, agentic AI benchmarks
After more than a month of rumors and feverish speculation -- including Polymarket wagering on the release date -- Google today unveiled Gemini 3, its newest proprietary frontier model family and the company's most comprehensive AI release since the Gemini line debuted in 2023. The models are proprietary (closed-source), available exclusively through Google products, developer platforms, and paid APIs, including Google AI Studio, Vertex AI, the Gemini CLI, and third-party integrations across the broader IDE ecosystem. Gemini 3 arrives as a full portfolio, including: * Gemini 3 Pro: the flagship frontier model * Gemini 3 Deep Think: an enhanced reasoning mode * Generative interface models powering Visual Layout and Dynamic View * Gemini Agent for multi-step task execution * Gemini 3 engine embedded in Google Antigravity, the company's new agent-first development environment. The launch represents one of Google's largest, most tightly coordinated model releases. Gemini 3 is shipping simultaneously across Google Search, the Gemini app, Google AI Studio, Vertex AI, and a range of developer tools. Executives emphasized that this integration reflects Google's control of TPU hardware, data center infrastructure, and consumer products. According to the company, the Gemini app now has more than 650 million monthly active users, more than 13 million developers build with Google's AI tools, and more than 2 billion monthly users engage with Gemini-powered AI Overviews in Search. At the center of the release is a shift toward agentic AI -- systems that plan, act, navigate interfaces, and coordinate tools, rather than just generating text. Gemini 3 is designed to translate high-level instructions into multi-step workflows across devices and applications, with the ability to generate functional interfaces, run tools, and manage complex tasks. Major Performance Gains Over Gemini 2.5 Pro Gemini 3 Pro introduces large gains over Gemini 2.5 Pro across reasoning, mathematics, multimodality, tool use, coding, and long-horizon planning. Google's benchmark disclosures show substantial improvements in many categories. In mathematical and scientific reasoning, Gemini 3 Pro scored 95 percent on AIME 2025 without tools and 100 percent with code execution, compared to 88 percent for its predecessor. On GPQA Diamond, it reached 91.9 percent, up from 86.4 percent. The model also recorded a major jump on MathArena Apex, reaching 23.4 percent versus 0.5 percent for Gemini 2.5 Pro, and delivered 31.1 percent on ARC-AGI-2 compared to 4.9 percent previously. Multimodal performance increased across the board. Gemini 3 Pro scored 81 percent on MMMU-Pro, up from 68 percent, and 87.6 percent on Video-MMMU, compared to 83.6 percent. Its result on ScreenSpot-Pro, a key benchmark for agentic computer use, rose from 11.4 percent to 72.7 percent. Document understanding and chart reasoning also improved. Coding and tool-use performance showed equally significant gains. The model's LiveCodeBench Pro score reached 2,439, up from 1,775. On Terminal-Bench 2.0 it achieved 54.2 percent versus 32.6 percent previously. SWE-Bench Verified, which measures agentic coding through structured fixes, increased from 59.6 percent to 76.2 percent. The model also posted 85.4 percent on t2-bench, up from 54.9 percent. Long-context and planning benchmarks indicate more stable multi-step behavior. Gemini 3 achieved 77 percent on MRCR v2 at 128k context (versus 58 percent) and 26.3 percent at 1 million tokens (versus 16.4 percent). 
Its Vending-Bench 2 score reached $5,478.16, compared to $573.64 for Gemini 2.5 Pro, reflecting stronger consistency during long-running decision processes. Language understanding scores improved on SimpleQA Verified (72.1 percent versus 54.5 percent), MMLU (91.8 percent versus 89.5 percent), and the FACTS Benchmark Suite (70.5 percent versus 63.4 percent), supporting more reliable fact-based work in regulated sectors. Generative Interfaces Move Gemini Beyond Text Gemini 3 introduces a new class of generative interface capabilities. Visual Layout produces structured, magazine-style pages with images, diagrams, and modules tailored to the query. Dynamic View generates functional interface components such as calculators, simulations, galleries, and interactive graphs. These experiences now appear in Google Search's AI Mode, enabling models to surface information in visual, interactive formats beyond static text. Google says the model analyzes user intent to construct the layout best suited to a task. In practice, this includes everything from automatically building diagrams for scientific concepts to generating custom UI components that respond to user input. Gemini Agent Introduces Multi-Step Workflow Automation Gemini Agent marks Google's effort to move beyond conversational assistance toward operational AI. The system coordinates multi-step tasks across tools like Gmail, Calendar, Canvas, and live browsing. It reviews inboxes, drafts replies, prepares plans, triages information, and reasons through complex workflows, while requiring user approval before performing sensitive actions. On the press call, Google said the agent is designed to handle multi-turn planning and tool-use sequences with consistency that was not feasible in earlier generations. It is rolling out first to Google AI Ultra subscribers in the Gemini app. Google Antigravity and Developer Toolchain Integration Antigravity is Google's new agent-first development environment designed around Gemini 3. Developers collaborate with agents across an editor, terminal, and browser. The system orchestrates full-stack tasks, including code generation, UI prototyping, debugging, live execution, and report generation. Across the broader developer ecosystem, Google AI Studio now includes a Build mode that automatically wires the right models and APIs to speed up AI-native app creation. Annotations support allows developers to attach prompts to UI elements for faster iteration. Spatial reasoning improvements enable agents to interpret mouse movements, screen annotations, and multi-window layouts to operate computer interfaces more effectively. Developers also gain new reasoning controls through "thinking level" and "model resolution" parameters in the Gemini API, along with stricter validation of thought signatures for multi-turn consistency. A hosted server-side bash tool supports secure, multi-language code generation and prototyping. Grounding with Google Search and URL context can now be combined to extract structured information for downstream tasks. Enterprise Impact and Adoption Enterprise teams gain multimodal understanding, agentic coding, and long-horizon planning needed for production use cases. The new model unifies analysis of documents, audio, video, workflows, and logs. Improvements in spatial and visual reasoning support robotics, autonomous systems, and scenarios requiring navigation of screens and applications. High-frame-rate video understanding helps developers detect events in fast-moving environments. 
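To make the new reasoning controls mentioned above concrete, here is a minimal request sketch against the Gemini API's generateContent endpoint. Treat it as an assumption-laden illustration rather than documented usage: the model identifier (gemini-3-pro-preview), the generationConfig field names (thinkingConfig, thinkingLevel), and the accepted values are placeholders inferred from the "thinking level" description above, so verify them against the official Gemini API reference before relying on them.

```python
import os
import requests

# Minimal sketch of a Gemini API generateContent call that sets a "thinking level".
# ASSUMPTIONS (not confirmed by this article): the model id and the exact
# thinkingConfig/thinkingLevel field names are placeholders -- check the official
# Gemini API reference for the real parameter names and allowed values.

API_KEY = os.environ["GEMINI_API_KEY"]   # API key from Google AI Studio
MODEL = "gemini-3-pro-preview"           # hypothetical model identifier

url = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"
payload = {
    "contents": [
        {"parts": [{"text": "Summarize the trade-offs between TPUs and GPUs for training."}]}
    ],
    "generationConfig": {
        # Hypothetical knob corresponding to the "thinking level" control described above.
        "thinkingConfig": {"thinkingLevel": "HIGH"},
    },
}

response = requests.post(url, params={"key": API_KEY}, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["candidates"][0]["content"]["parts"][0]["text"])
```

The same request shape applies whether the reasoning control is dialed up for long-horizon planning or down for cheap, low-latency calls; only the configuration block changes.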
Gemini 3's structured document understanding capabilities support legal review, complex form processing, and regulated workflows. Its ability to generate functional interfaces and prototypes with minimal prompting reduces engineering cycles. In addition, the gains in system reliability, tool-calling stability, and context retention make multi-step planning viable for operations like financial forecasting, customer support automation, supply chain modeling, and predictive maintenance. Developer and API Pricing Google has disclosed initial API pricing for Gemini 3 Pro. In preview, the model is priced at $2 per million input tokens and $12 per million output tokens for prompts up to 200,000 tokens in Google AI Studio and Vertex AI. Gemini 3 Pro is also available at no charge with rate limits in Google AI Studio for experimentation. The company has not yet announced pricing for Gemini 3 Deep Think, extended context windows, generative interfaces, or tool invocation. Enterprises planning deployment at scale will require these details to estimate operational costs. Multimodal, Visual, and Spatial Reasoning Enhancements Gemini 3's improvements in embodied and spatial reasoning support pointing and trajectory prediction, task progression, and complex screen parsing. These capabilities extend to desktop and mobile environments, enabling agents to interpret screen elements, respond to on-screen context, and unlock new forms of computer-use automation. The model also delivers improved video reasoning with high-frame-rate understanding for analyzing fast-moving scenes, along with long-context video recall for synthesizing narratives across hours of footage. Google's examples show the model generating full interactive demo apps directly from prompts, illustrating the depth of multimodal and agentic integration. Vibe Coding and Agentic Code Generation Gemini 3 advances Google's concept of "vibe coding," where natural language acts as the primary syntax. The model can translate high-level ideas into full applications with a single prompt, handling multi-step planning, code generation, and visual design. Enterprise partners like Figma, JetBrains, Cursor, Replit, and Cline report stronger instruction following, more stable agentic operation, and better long-context code manipulation compared to prior models. Rumors and Rumblings In the weeks leading up to the announcement, X became a hub of speculation about Gemini 3. Well-known accounts such as @slow_developer suggested internal builds were significantly ahead of Gemini 2.5 Pro and likely exceeded competitor performance in reasoning and tool use. Others, including @synthwavedd and @VraserX, noted mixed behavior in early checkpoints but acknowledged Google's advantage in TPU hardware and training data. Viral clips from users like @lepadphone and @StijnSmits showed the model generating websites, animations, and UI layouts from single prompts, adding to the momentum. Prediction markets on Polymarket amplified the speculation. Whale accounts drove the odds of a mid-November release sharply upward, prompting widespread debate about insider activity. A temporary dip during a global Cloudflare outage became a moment of humor and conspiracy before odds surged again. The key moment came when users including @cheatyyyy shared what appeared to be an internal model-card benchmark table for Gemini 3 Pro. The image circulated rapidly, with commentary from figures like @deedydas and @kimmonismus arguing the numbers suggested a significant lead. 
When Google published the official benchmarks, they matched the leaked table exactly, confirming the document's authenticity. By launch day, enthusiasm reached a peak. A brief "Geminiii" post from Sundar Pichai triggered widespread attention, and early testers quickly shared real examples of Gemini 3 generating interfaces, full apps, and complex visual designs. While some concerns about pricing and efficiency appeared, the dominant sentiment framed the launch as a turning point for Google and a display of its full-stack AI capabilities. Safety and Evaluation Google says Gemini 3 is its most secure model yet, with reduced sycophancy, stronger prompt-injection resistance, and better protection against misuse. The company partnered with external groups, including Apollo and Vaultis, and conducted evaluations using its Frontier Safety Framework. Deployment Across Google Products Gemini 3 is available across Google Search AI Mode, the Gemini app, Google AI Studio, Vertex AI, the Gemini CLI, and Google's new agentic development platform, Antigravity. Google says additional Gemini 3 variants will arrive later. Conclusion Gemini 3 represents Google's largest step forward in reasoning, multimodality, enterprise reliability, and agentic capabilities. The model's performance gains over Gemini 2.5 Pro are substantial across mathematical reasoning, vision, coding, and planning. Generative interfaces, Gemini Agent, and Antigravity demonstrate a shift toward systems that not only respond to prompts but plan tasks, construct interfaces, and coordinate tools. Combined with an unusually intense hype and leak cycle, the launch marks a significant moment in the AI landscape as Google moves aggressively to expand its presence across both consumer-facing and enterprise-facing AI workflows.
[32]
'I'm not going back': Billionaire Marc Benioff says he's switching to Google's Gemini 3 after using 'ChatGPT every day for three years' | Fortune
"Holy s -- . I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back," Benioff wrote on X. "The leap is insane -- reasoning, speed, images, video... everything is sharper and faster. It feels like the world just changed, again." Benioff, who has 1.1 million followers on X, had more than 3.2 million people see his post by Tuesday morning, according to the social network. His endorsement is notable not just because of his prominence -- his net worth is currently around $8.5 billion, according to Forbes -- but because of his company's extensive partnerships across the AI landscape. Just last month, Salesforce announced an expanded strategic partnership with OpenAI, integrating its Agentforce 360 platform with ChatGPT and allowing enterprises to use OpenAI's GPT-5 models within Salesforce products. At the time, Benioff praised that collaboration. "As consumers, we already get instant recommendations or insights from ChatGPT," he said in October. "Now enterprises can deliver that same intelligence and immediacy." The swift about-face from ChatGPT to Google's Gemini underscores how quickly things can change as AI models leapfrog one another in capability on a near-monthly basis. Google and its DeepMind division released Gemini 3 last week, describing it as the company's "most intelligent model, which "combines all of Gemini's capabilities together so you can bring any idea to life." The model almost immediately topped the LMArena leaderboard, a crowdsourced benchmark that evaluates AI systems on reasoning, coding, writing, and factual accuracy. Leaders across Silicon Valley took notice of Gemini's big launch. OpenAI CEO Sam Altman congratulated Google on X, saying: "Congrats to Google on Gemini 3! Looks like a great model." Andrej Karpathy, one of OpenAI's co-founder and a former AI director at Tesla, said on X he had "a positive early impression" of the model, calling it "very solid daily driver potential" and "clearly a tier 1 LLM." Stripe CEO Patrick Collison posted on X that Gemini 3 successfully built him an "interactive web page summarizing 10 breakthroughs in genetics," which he called "pretty cool." But behind the public compliments, there are signs of concern at OpenAI. In an internal memo written before Gemini 3's release and obtained by The Information, Altman told employees to expect "rough vibes," adding "by all accounts, Google has been doing excellent work recently." He said Google's progress could "create some temporary economic headwinds for our company," but insisted OpenAI is "catching up fast." There's been a flurry of new models in the AI sector recently. OpenAI released GPT-5.1 less than a week before Google debuted Gemini 3, and Anthropic just launched its Claude Opus 4.5 model on Monday.
[33]
Gemini 3 is live and ready to show the next leap in AI
What's happened? Google has announced the rollout of Gemini 3, stating it is its most capable and intelligent AI model yet. The model handles text, images, and audio simultaneously, meaning you could show a photo, ask about it, and hear or read a detailed answer, all in one go. It's also available immediately in the Gemini app for Pro users and is being integrated into Google Search. * Gemini 3 Pro is described by Google as "natively multimodal," supporting tasks like turning recipe photos into full cookbooks or generating interactive study tools from video lectures. * Google says the model has improved reasoning capabilities, better task-planning, and reduced "sycophancy" (i.e., less flattery and more direct answers) compared to past versions. * The launch comes with new tools like Google's Antigravity coding platform, which uses Gemini 3 Pro to automate workflows and document every step via artifacts. Why this is important: This launch signals a major shift in how we might interact with AI. With Gemini 3's multimodal ability, you're no longer limited to typing questions. Instead, you can show images, talk to it, and play audio, all in the same session. That opens doors for smarter assistants, better content generation, and workflows that really fit how we think and work. For developers, businesses, and Google itself, this model sets the stage for a new wave of AI-powered tools. If Gemini 3 works well in real-world use, it could redefine expectations around virtual assistants, creative tools, and search itself. Moreover, by reducing errors, improving reasoning, and integrating across tools (like Search and coding environments), Google is positioning AI not just as an assistant, but as something proactively helpful. That means the AI you engage with could become more capable, contextual, and tailored to you. Why should I care? If you use AI tools, create digital content, or rely on search and productivity apps, Gemini 3 could noticeably shift your day-to-day experience. It's not just a speed bump; it's a broader upgrade in what Google's AI can understand and produce. * Better answers: With stronger reasoning and multimodal input handling, interactions can feel quicker, more natural, and more accurate. * Smarter workflows: Whether coding, researching, or working on creative projects, the tools around you may feel smoother and more capable, cutting down on the small frustrations that slow you down. * Platform shifts: As Google weaves Gemini 3 deeper into Search, Workspace, and other apps, expect familiar features to quietly evolve, even if you don't notice the change right away. In short, even if you don't "see" Gemini 3 directly, you'll likely feel its influence as it becomes the engine behind more of Google's ecosystem. It builds on Gemini 2.5's foundation but with sharper reasoning, better instruction-following, and more stable multimodal performance. Tasks that previously tripped up 2.5, like maintaining context or juggling multiple images, are handled more smoothly here, resulting in an upgrade that feels less like a version step and more like Google redefining how its AI assistant should behave. And here's where things get even more interesting: Gemini 3 Pro isn't just better on paper, but it's also scoring noticeably higher across various AI benchmarks. These gains show up in areas like long-form reasoning, code generation, and complex multimodal tasks.
In real use, that translates to fewer moments where the model loses track of what you're asking, a higher chance of getting the answer you actually wanted, and more stable performance when juggling multiple files, images, or steps. Okay, so what's next? If you're using the free version, you can start experimenting with Gemini 3 today, as it's already live across the Gemini app and in AI Mode in Search. This means you can test its improved reasoning, multimodal input (text, images, etc.), and more intuitive prompts to see how it works for your daily tasks. For Pro (and Ultra) users, there's more in store: you'll get access to the full Gemini 3 Pro model's advanced capabilities (stronger reasoning, deeper context handling, richer multimodal responses) and soon the new "Deep Think" mode which is designed for the most complex workflows. All of this means that if you upgrade, you'll experience a higher-tier version of Gemini that's more powerful and responsive.
[34]
Google launches Gemini 3 with state-of-the-art reasoning, 'generative UI' for responses, more
Google today announced Gemini 3 with the goal of bringing "any idea to life." The first model available in this family is Gemini 3 Pro, with the rollout starting today for the Gemini app and AI Mode. With Gemini 1.0, Google focused on native multimodality and the long context window. A year later Gemini 2.0 brought advanced reasoning and the beginning of agentic capabilities, while Gemini 2.5 introduced deep reasoning and coding capabilities. Gemini 3 -- which drops the ".0" -- is Google's "most intelligent model" and positioned as helping you "bring any idea to life." It starts by getting better at figuring out the context and intent of your request, so that "you get what you need with less prompting." Gemini 3 is state-of-the-art in reasoning with the ability to "grasp depth and nuance," like "perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem." Gemini 3 Pro responses aim to be "smart, concise, and direct, trading cliche and flattery for genuine insight." It acts as a true thought partner that gives you new ways to understand information and express yourself, from translating dense scientific concepts by generating code for high-fidelity visualizations to creative brainstorming. Gemini 3 Pro has a score of 1501 on LMArena and surpasses 2.5 Pro (1451), which previously held the top position. It outperforms the model it's replacing in all major benchmarks by a significant margin. This means Gemini 3 Pro is highly capable at solving complex problems across a vast array of topics like science and mathematics with a high degree of reliability. Google today also announced the Gemini 3 Deep Think mode with even better reasoning and multimodal understanding. It outperforms Gemini 3 Pro on Humanity's Last Exam (41.0% without the use of tools) and GPQA Diamond (93.8%). This will be available in the coming weeks for AI Ultra subscribers. It also achieves an unprecedented 45.1% on ARC-AGI (with code execution), demonstrating its ability to solve novel challenges. Gemini 3 makes possible generative UI (or generative interfaces) wherein LLMs generate both content and entire user experiences. This includes web pages, games, tools, and applications that are "automatically designed and fully customized in response to any question, instruction, or prompt." This work represents a first step toward fully AI-generated user experiences, where users automatically get dynamic interfaces tailored to their needs, rather than having to select from an existing catalog of applications. Behind the scenes, Gemini 3 Pro leverages tool access like web search and image generation, as well as "carefully crafted system instructions." The system is guided by detailed instructions that include the goal, planning, examples and technical specifications, including formatting, tool manuals, and tips for avoiding common errors. Finally, the output is sent through post-processors that address "potential common issues." This is launching today in the Gemini app as experiments. Dynamic view sees Gemini 3 design and code a "fully customized interactive response for each prompt." It customizes the experience with an understanding that explaining the microbiome to a 5-year-old requires different content and a different set of features than explaining it to an adult, just as creating a gallery of social media posts for a business requires a completely different interface than generating a plan for an upcoming trip.
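The pipeline described above (detailed system instructions, tool access, then post-processing) can be pictured with a small sketch. This is only an illustrative toy: the field names, helper functions, and rules below are assumptions made for demonstration, not Google's actual implementation.

```python
# Toy sketch of the generative-UI flow described above: a detailed system
# instruction (goal, planning, examples, technical spec), a model call with
# tool access, then a post-processing pass. All names here are illustrative
# assumptions, not Google's internals.
SYSTEM_INSTRUCTION = {
    "goal": "Design and code a fully customized interactive response for each prompt.",
    "planning": ["infer audience and intent", "pick a layout", "draft content", "emit UI code"],
    "examples": ["explain the microbiome to a 5-year-old", "gallery of social posts for a business"],
    "spec": {"formatting": "self-contained HTML", "tools": ["web_search", "image_generation"]},
    "tips": ["avoid dead links", "label interactive controls clearly"],
}

def generate_ui(prompt: str, system: dict) -> str:
    """Stand-in for the model call; a real system would return generated markup."""
    return f"<main><h1>{prompt}</h1><p>(generated interactive content)</p></main>"

def post_process(markup: str) -> str:
    """Address 'potential common issues' -- here, a trivial sanitization pass."""
    return markup.replace("<script>", "").replace("</script>", "")

print(post_process(generate_ui("Explain the microbiome to a 5-year-old", SYSTEM_INSTRUCTION)))
```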
Visual layout is the second experiment and creates an "immersive, magazine-style view complete with photos and modules." The main difference from dynamic view is how Gemini will generate sliders, checkboxes, and other filters that let you customize the results further. You might initially only see one of these experiments at a time to allow Google to gather feedback. For more on what Gemini 3 brings to the Gemini app (including Gemini Agent), read our story here. Meanwhile, this is the first time that a new model is coming to Google Search and AI Mode alongside the Gemini app. Starting this week, AI Pro and AI Ultra subscribers can go to the dropdown menu in the top-left corner and select "Thinking: 3 Pro reasoning and generative layouts." With Gemini 3, Google's query fan-out technique can perform more searches than before, asking more nuanced questions to improve the final response you get. AI Mode will also use generative UI to create interactive tools and simulations. For example, Google might build a mortgage calculator that lets you change the interest rate and down payment. Another is getting a physics simulation when you're learning about a topic. Gemini 3 will next come to all (free) AI Mode users in the US, with subscribers getting higher limits. Looking ahead, Google in the coming weeks will update Search's automatic model selection for subscribers to send challenging questions to Gemini 3 "while continuing to use faster models for simple tasks." With Gemini 3, Google Antigravity was announced as a new agentic development platform that allows developers to "operate at a higher, task-oriented level." This IDE sees agents work across the editor, terminal, and browser. Available now on Mac, Windows, and Linux, it uses Gemini 3, Gemini 2.5 Computer Use, and Nano Banana.
[35]
Gemini 3 brings upgraded smarts and new capabilities to the Gemini app
Gemini Agent: A new tool that orchestrates and completes complex, multi-step tasks on your behalf, rolling out to Google AI Ultra members first. Gemini 3 sets a new bar for AI model performance. You'll notice the responses are more helpful, better formatted and more concise. But it's not just the response formatting that's improved. The entire experience has gotten smarter. It's our best vibe coding model ever, so the apps you build in Canvas will be more full-featured. It's the best model in the world for multimodal understanding, so whether you're uploading a photo of your homework to ask for extra help, or transcribing notes from a lecture you missed, the Gemini app is ready. Gemini 3 Pro is rolling out globally starting today. To use it, simply select "Thinking" from the model selector. Our Google AI Plus, Pro and Ultra subscribers will continue to enjoy higher limits. We're also extending our free year of Google AI Pro to U.S. college students, ensuring they have access to the best of Google AI, including Gemini 3.
[36]
Google Releases Its Most Powerful AI Model, Gemini 3 -- Here's What You Need to Know - Decrypt
Google released Gemini 3 Pro in a public preview today, calling it the company's most capable AI model to date. The system handles text, images, audio, and video simultaneously while processing up to 1 million tokens of context -- roughly equivalent to 700,000 words, or about 10 full-length novels. The preview model is available for free for anyone to try in Google AI Studio. Google said the model outperformed its predecessor, Gemini 2.5 Pro, across nearly every benchmark the company tested. On Humanity's Last Exam, an academic reasoning test, Gemini 3 Pro scored 37.5% compared to 2.5 Pro's 21.6%. On ARC-AGI-2, a visual reasoning puzzle benchmark, the gap widened further: 31.1% versus 4.9%. Of course, the real challenge at this point in the AI race isn't technical so much as it is gaining commercial market share. Google, which once seemed indomitable in the search space, has given up an enormous amount of ground to OpenAI, whose ChatGPT claims some 800 million weekly users, versus Gemini, which reportedly has around 650 million monthly users. Google has not shared weekly figures, but those would necessarily be lower than its monthly count. Still, the technical achievements of Gemini 3 are impressive. Gemini 3 Pro uses what Google calls a sparse mixture-of-experts architecture. Instead of activating all 1 trillion-plus parameters for every query, the system routes each input to specialized subnetworks. Only a fraction of the model -- the expert at that specific task -- runs at any given time, cutting computational costs while maintaining performance. Unlike GPT and Claude, which are large, dense models (a jack of all trades), Google's approach operates the way a large organization does. A company with 1,000 employees doesn't call everyone to every meeting; specific teams handle specific problems. Gemini 3 Pro works the same way, directing questions to the right expert networks. Google trained the model on web documents, code repositories, images, audio files, and video -- plus synthetic data generated by other AI systems. The company filtered the training data for quality and safety, removing pornographic content, violent material, and anything violating child safety laws. Training happened on Google's Tensor Processing Units using JAX and ML Pathways software. A quick test of the model showed that it was very capable. In our usual coding test, asking it to generate a stealth game, this was the first model that generated a 3D game instead of a 2D experience. Other runs provided 2D versions, but all were functional and fast. The interface follows the style of ChatGPT or Perplexity, which encourage further interaction by sharing follow-up questions and suggestions, but Google's implementation is a lot cleaner and more helpful. While generating code, the interface provides tips to help in subsequent prompts, so the user can guide the model into generating better code, fixing bugs, and improving the app's logic, UI, etc. It also gives users the option to deploy their code and build Gemini-powered apps. Overall, this model seems to be especially focused on coding tasks. Creativity is not its strong point, but it can be easy to guide with a good system prompt and examples, as it has a very large token context window.
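To make the mixture-of-experts idea above concrete, here is a minimal numerical sketch of top-k expert routing. It is a toy illustration only; the expert count, dimensions, and top-k value are assumptions chosen for readability and bear no relation to Gemini 3's real architecture.

```python
# Toy illustration of sparse mixture-of-experts routing: a gate scores the
# experts, only the top-k experts actually run, and their outputs are
# combined. All sizes here are arbitrary demo values.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class SparseMoELayer:
    def __init__(self, num_experts=8, dim=16, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.normal(size=(dim, num_experts))        # routing (gate) weights
        self.experts = rng.normal(size=(num_experts, dim, dim))  # one small network per expert
        self.top_k = top_k

    def __call__(self, token):
        scores = softmax(token @ self.router)        # gate score for each expert
        chosen = np.argsort(scores)[-self.top_k:]    # only the top-k experts are activated
        out = np.zeros_like(token)
        for e in chosen:                             # weighted sum of the chosen experts' outputs
            out += scores[e] * np.tanh(token @ self.experts[e])
        return out

layer = SparseMoELayer()
token = np.random.default_rng(1).normal(size=16)
print(layer(token).shape)  # (16,) -- same shape out, but only 2 of 8 experts ran
```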
An archived version of Gemini 3's model card -- a document that provides essential information about the model's design, intended use, performance, and limitations -- published by Google DeepMind shows that Gemini 3 Pro can generate up to 64,000 tokens of output and maintains a knowledge cutoff of January 2025. Google acknowledged the model may hallucinate and occasionally experiences slowness or timeouts. An official model card is not currently available. As mentioned, Google AI Studio is currently offering everyone free access to Gemini 3 Pro. Vertex AI and the Gemini API also support the model. Gemini 3 Pro is not yet available through the Gemini app, however -- not even for paying Gemini Pro subscribers. The November release positions Google against Anthropic's Claude Sonnet 4.5, Grok 4.1 and even OpenAI's GPT-5.1. Benchmark scores suggest Gemini 3 Pro leads in reasoning and multimodal tasks, though real-world performance varies by use case. Google distributed Gemini 3 Pro through its cloud platforms subject to existing terms of service. The company's generative AI prohibited use policy applies, blocking use in dangerous activities, security compromises, sexually explicit content, violence, hate speech, and misinformation.
[37]
Gemini 3 can do way more than ChatGPT in Search -- here's how to use it right now
After a lot of rumors and hints, Google's Gemini 3 finally dropped this week, bringing huge competition to ChatGPT. This state-of-the-art AI model has brought major improvements across a variety of Google tools, including the Gemini chatbot itself. But one of the most interesting changes is in Google search. Head over to the AI mode found in Google Search, and you'll find some new changes to make searching smarter than ever. "Gemini 3 brings incredible reasoning power to Search because it's built to grasp unprecedented depth and nuance for your hardest questions," Elizabeth Hamon Reid, VP of Engineering for Google Search, said, announcing the news. "It also unlocks new generative UI experiences so you can get dynamic visual layouts with interactive tools and simulations -- generated specifically for you." So, what does that all look like in practice? AI mode is a fairly new feature for Google, rolling out over the past couple of months. However, it is very simple to use. On your desktop, head to Google Search and you'll see an AI mode button at the end of the search bar. Clicking this will put you into the mode, where you can search in the same way that you would on Google, but with some AI-powered assistance. AI mode can also be found at the top of Google when looking through search results. For now, Gemini 3's rollout in search is limited. The update is only available for Google AI Pro and Ultra subscribers in the U.S. However, Google will soon roll this out to everyone in the U.S., and paid subscribers will have higher usage limits. Users will also gain access to a 'Thinking' setting on AI mode. Through this, Gemini 3 is able to tackle more complicated questions, learning to respond to your intent and the nuance of your request. This will be music to the ears of those who felt like Google's AI Mode and AI Overviews were lacking in understanding, often missing obvious nuance in a question. Using Gemini 3's advanced reasoning, Google search is getting a major upgrade to its query fan-out technique. This essentially means Google AI mode is delving into more sources, bringing you information from a far wider array of places. In the coming weeks, Google will also be enhancing its automatic model selection. This means Google AI mode will assess your query and utilize the best model for its complexity. More powerful models will be used for challenging tasks, whilst quicker ones will pick up easier requests. Advancements in AI mode are exciting, but it's the rollout of Google's new generative UI features that is more intriguing. Google will now be able to use its agentic coding abilities to create clever visual layouts, including interactive tools and simulations. For example, if you ask Google to explain a complicated scientific theory, it can provide a written answer, along with an interactive game that lets you explore the theory yourself. Google will scan your request and automatically apply this feature when it seems relevant. Another example Google gives is the addition of a mortgage calculator when you ask questions relevant to buying a house. Some version of this has been available for years with Google, but in a much more simplified form. For example, when you ask about currency conversion, Google will often provide a calculator, or asking for flights can bring up an interactive date selector. The goal here, Google states, is to provide further tools when they can better explain a concept, utilizing generative AI to make something new each time.
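The query "fan-out" technique described above can be sketched roughly as: take one question, expand it into several narrower sub-queries, run them in parallel, and merge the results. The sub-query templates and helper names below are invented for illustration under that assumption and are not Google's internals.

```python
# Hedged sketch of a query fan-out pattern: one question becomes several
# more nuanced sub-queries that run concurrently, and the hits are merged
# before a final answer is composed.
import concurrent.futures

def expand_query(query: str) -> list[str]:
    # A real system would have the model generate nuanced sub-questions;
    # here we fake it with simple templates.
    return [query,
            f"{query} recent developments",
            f"{query} expert analysis",
            f"{query} common misconceptions"]

def run_search(sub_query: str) -> dict:
    # Placeholder for a call to a real search backend.
    return {"query": sub_query, "results": [f"result for {sub_query!r}"]}

def fan_out(query: str) -> list[dict]:
    sub_queries = expand_query(query)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return list(pool.map(run_search, sub_queries))

for hit in fan_out("how do solar eclipses work"):
    print(hit["query"])
```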
[38]
Google's New Gemini Pro Features Are Out, but Most of Them Will Cost You
Free users have access to limited daily generations in the Gemini app or on the Gemini website. Google has officially launched its Gemini 3 Pro model, and for the first time, it's already making its way directly to the public, without making you wait months after the announcement to actually get your hands on it. On top of introducing a few new features, Google says the model has the typical increased accuracy, but also, that it finally cuts down on some of the excessive people-pleasing flattery that drives me nuts when using AI. The catch? Most features are paywalled, and one is still in the oven. For the more obvious improvements, Google says Gemini 3 Pro is at the top of LMArena with a score of 1501 points, while also demonstrating "PhD-level reasoning" on Humanity's Last Exam, earning a 37.5% score without the use of tools. Math buffs will also be happy to hear that the new model scored 91.9% on GPQA Diamond and 23.4% on MathArena Apex. But for everyone else, the real excitement comes from the brand new things you can do in Gemini 3 Pro. This isn't just a performance bump. Instead of just being Gemini 2.5 but better, Gemini 3 Pro also debuts three new consumer-facing features, plus a new platform for developers. It's a much meatier release than, say, ChatGPT 5.1 -- assuming you're willing to pay up. Google's press release bounces back and forth between calling this feature "Generative UI" and "Generative Interfaces," although I prefer the former. Essentially, it's supposed to make your AI results easier to read. It's one of the few new features that's freely open to everyone, although Google has two different approaches to it, and not everyone might see the same one. The first is called "visual layout," and is more similar to Gemini's current results pages. Essentially, when you enter a complex, multi-layered prompt, like "plan a three-day trip to Rome next summer," you'll now get an explorable visual itinerary, rather than static text. This will include photos and clickable modules, but remain within the Gemini interface you've gotten used to. It may also have interactive sliders and buttons for further refining your search, but Google says the idea is to give you an "immersive, magazine-style view." The second, then, is more like an on-demand webpage. It's called "dynamic view," and basically uses agentic coding to generate an on-the-fly app to help you learn more about a topic. This will include generated text and imagery, but may appear quite different from the Gemini interface you've gotten used to. An example in the press release sees Google generating a dynamic view response to "explain the Van Gogh Gallery with life context for each piece," which creates a scrollable page with a clickable header, the art justified on the left, and scrollable text on the right with clickable subheadings and pull quotes, all with custom font and design that can differ pretty wildly from Gemini's other output. One of the issues I've had with lengthy plain-text AI responses is that they can get a bit tiresome to skim, and Google's hoping this will help deal with that. Still, it hasn't settled on an approach. Both visual layout and dynamic view are rolling out today, to free and paying customers, but the company says that "to help us compare these experiments, you may initially see only one of them." As a contrast to Generative UI, Gemini 3 Deep Think is behind a hefty paywall, and still in the oven. 
The feature is an evolution of the existing Gemini 2.5 Deep Think mode, and essentially allows the AI to take more time to answer a question so it can better reason out an appropriate response. It's similar to existing free deep research modes across Gemini and other AI apps, but more broadly applicable, and Google says it's great for use cases like intricate graphic design or coding. Unfortunately, it's only in the hands of a few "safety testers" for now. Google told press it will start releasing "in the coming weeks," but even then, it'll be limited to Google AI Ultra subscribers. Since Google AI Ultra is $250/month (although it starts at $125/month for the first three months), that's a pricey proposition. Another Google AI Ultra-exclusive feature, Gemini Agent, will start rolling out to subscribers today, and is similar to the company's recent AI shopping push. The idea is for it to take action for you, to help you handle multi-step tasks without having to leave Gemini. The key thing is that it works with Google's other apps. So, for instance, you could ask it to help organize your Gmail inbox, and it'll separate out your pending emails into categories and submit emails it thinks you could delete for your approval. Or, you could ask it to help you rent a car under a certain budget for an upcoming trip, and it'll scan your emails for flight and hotel details, then search for an appropriate booking and reach out to you before finalizing it. It's got access to the web, Google Workspace, and other AI tools like Canvas, so in theory, it can pull from pretty much every resource Google has to help you answer your question. Google also says it will "seek confirmation before critical actions like making purchases or sending messages, and you can take over any time." If it works, it sounds like the type of virtual secretary most people probably thought of when these companies started talking about AI just a few years ago. But with such a high monthly price tag, it's probably only for the highest of rollers for now. So, kind of like a regular secretary, I guess. Finally, this feature is intended more for developers than the average internet user, but it's worth bringing up, if only because it's free. Called Google Antigravity, it's a new development platform focusing on agentic coding, AKA having the AI generate code for you. It's more complicated than that -- you're able to freely browse and edit generated code -- but the idea is to make it easier to use Gemini as a development partner. It's not Gemini's first development tool, but the idea is to give developers a dedicated AI workspace. To that end, Antigravity can pull from existing features like Canvas, as well as implement "browser control capabilities" and "asynchronous interaction patterns" to achieve an "agent-first product form factor" that can "autonomously plan and execute complex, end-to-end software tasks." I'm sure the people who would use Antigravity will know what all that means. To a layperson like me, it seems like the big improvement is that it's one app that you can use to go straight from ideation to publishing, rather than having to bounce between several different Gemini tools. What's probably more interesting, though, is that Antigravity is free, "with generous rate limits on Gemini 3 Pro usage." Yep, developing with Gemini is actually now cheaper than using Gemini, depending on what you want to do. At least some of Gemini 3 Pro is available to everyone.
To try Gemini 3 Pro for yourself, open the Gemini app or webpage and select "Thinking" from the model selector underneath your prompt. Google told me free users will have up to five Gemini 3 Pro prompts per day, while AI Plus, Pro, and Ultra subscribers will "enjoy higher limits." Google AI Pro and Ultra subscribers will also be able to use Gemini 3 Pro right from Google search's AI Mode, also by selecting "Thinking" from the model selector (next to the "AI Mode" button). Free users will have to stick to the Gemini app for now, unfortunately, but Google says that will change "soon." On the plus side, AI Mode will still be able to generate visual layouts and dynamic views, assuming you have access to Gemini 3 Pro within it. AI Overviews will also start to use Gemini 3 Pro for AI Pro and Ultra subscribers, although that's set for "the coming weeks." When it arrives, AI Mode will also be upgraded to automatically send your hardest questions to Gemini 3 Pro, without you having to pick it in the model selector (although you can continue to manually pick older models if you prefer). Personally, I'm glad to see Google releasing Gemini 3 Pro alongside concrete new features, instead of just making "AI, but better." At the same time, because most of this requires a subscription, it's clear we're still a little while away from that wide AI adoption the tech industry still seems to be clamoring for.
[39]
Gemini 3 gets the power to shape Search results for maximum impact
Access starts with AI Pro and AI Ultra subscribers in the US. Google's latest AI advancements are making their debut today, with Gemini 3 arriving to showcase its next-gen reasoning and agentic skills. We're getting our first look at those upgrades across the many Google services that already tap into Gemini, and for many of us the one we're going to be interacting with the most has got to be Search. With Gemini 3, Google Search promises better performance, deeper, more targeted results, and a visual overhaul to how it presents information.
[40]
Google Gemini 3 has dropped - here are 6 prompts that show what it can do
This article is part of TechRadar's AI Week 2025. Covering the basics of artificial intelligence, we'll show you how to get the most from the likes of ChatGPT, Gemini, or Claude, alongside in-depth features, news, and the main talking points in the world of AI. Google has officially launched Gemini 3, the latest version of its AI model. We've been testing it and our first impression is that it's a big leap forward for the company's AI tools. This update brings sharper reasoning, stronger multimodal understanding, and a whole raft of features that reshape how Google's apps, Search, and the Gemini assistant will work day to day. Alongside the new models, Google has also introduced new AI interfaces and capabilities across its ecosystem. From upgraded Search results in AI Mode to more powerful creative, planning, and research tools inside the Gemini app. It's a big step toward Google's vision of a genuinely useful AI layer woven into everything you use. How you prompt with Gemini has changed too. Google says Gemini 3 is now far better at interpreting messy instructions and breaking down complex tasks on its own. But a few clever prompting techniques still make a huge difference in the results you get. Below, we've rounded up six simple prompts straight from Google's announcement that show what Gemini 3 can do and inspire you to create your own. Gemini 3 is Google's newest generation of AI models. It's one of the biggest upgrades the company has made to its AI offering. You'll see it across multiple products immediately, like the Gemini app, Google Search's AI Mode, NotebookLM and Google's developer platforms. But it's going to be most noticeable inside Google's main AI assistant, Gemini. Gemini 3 brings significant improvements in reasoning, accuracy, and multimodal capability. Which means a whole bunch of things, including that it can process and understand longer, more complex inputs. It can also break big problems into smaller steps, analyze images and videos with more context, and deliver more reliable results. There are two parts of Gemini 3 you need to know about. Gemini 3 Pro is the main model. The one that powers the Gemini app, Google Search's AI Mode, and Google's more consumer-facing features. There's also Gemini 3 Deep Think, a reasoning mode designed for tougher, multi-step problems. It's currently in testing and will be aimed at advanced users. Google has also introduced new capabilities built around the model, including more agentic behaviour. Which means Gemini will be able to plan multi-step actions, handle longer workflows, and operate with more autonomy when asked. Prompt: Help me plan a 3 day trip to Rome next summer. Gemini 3 has added 'generative interfaces'. They're essentially magazine-style summaries complete with pictures, photos and modules. The example Google gives is asking Gemini to plan a trip and it then serving up an itinerary in this style, which we think looks great. But it'll also aim to find out more about what sort of pace you want from your trip, the kinds of sights you want to see, and other details before it creates it. In Google's screenshot above, you can see Gemini hasn't just served up general information about Rome, but has tailored it to the user, calling it an "Art Pilgrimage" with specific recommendations. Prompt: Create a Van Gogh gallery with real life context for each piece. In the trip planning example above, we saw what Gemini can do with a 'generative interface' in a visual layout. But it can also create a dynamic layout. 
So rather than a nice-looking magazine, think a custom interface that you can actually interact with. We've included Google's prompt above, but you can get creative here as a way to really accelerate your learning about a subject. This approach actually uses Gemini 3's agentic capabilities, meaning rather than you having to tell it to do loads of different things, it figures out all of the steps needed to get something done by itself. Prompt: Help me understand solar eclipses with diagrams and interactive sections. This isn't an official Google example, it's a use case for Gemini's dynamic layouts that I think will be incredibly helpful, especially for students, researchers, or anyone who learns visually. The Van Gogh demo is impressive, but the real power of this feature will shine with complex concepts and hard-to-visualize lessons. Physics, space, scientific theory, these are the kinds of topics people will want transformed into a dynamic, interactive interface they can actually explore. Prompt: What are the parts of a plant cell? Where the previous example focuses on visualizing big, abstract concepts, this one from Google's demo shows how Gemini 3 can help with detailed, structured topics that require clear, interactive diagrams and images. Ask Gemini to build a layout that includes a fully interactive, labelled diagram of a plant cell, so you can zoom in to view each component and see explanations of what each part does. It's perfect for students, teachers, or anyone learning biology who benefits from clean, accurate visuals and bite-sized definitions. Prompt: Help me organize my inbox. Another prompt directly from Google's demos that looks achingly simple but yields great results. Now, the key to this one is you need to go and select 'Agent' from the dropdown menu in Gemini. Gemini Agent is still described by Google as an "experimental feature" but it's very cool because it basically takes a task and breaks it down into a whole bunch of steps before beginning on them all. You'll get the best organization benefits if you ensure it's connected to your Google apps. That way it can wrangle your inbox, manage your calendar, add reminders and loads more. Prompt: Research and help me book a mid-size SUV for my trip next week under $80/day using details from my email. We like this prompt suggestion from Google because it shows that Gemini's agentic AI capabilities aren't just about making you more productive with work. They can help you with all sorts of overwhelming logistical planning in your personal life, too. With just a simple prompt about a trip, for example, Gemini can carry out a whole bunch of tasks right away, like locating travel information, comparing rentals within a budget and preparing a booking. It's similar to the trip planning visual layout above, but it's less "here's what you could do" and more "let's get stuff done."
[41]
Gemini 3 Pro powers up Google's AI ambitions
Why it matters: The rollout builds on Google's recent momentum, including reports that a future custom model could power Apple's Siri. * Google says Gemini 3 Pro sets new high scores on a variety of existing benchmarks, is capable of more nuanced and complex answers and generates interactive graphics in response to prompts. Driving the news: Gemini 3 Pro will roll out starting today in the Gemini app. * Gemini 3 Pro will also start powering AI Mode in search, the first time that a new model has been integrated with search at launch. * Developers will have access to Gemini 3 Pro via Google's Vertex AI and AI Studio. * Google is also introducing a new agentic tool called Antigravity that lets developers describe apps more conceptually, instead of coding them line by line. * A more advanced Gemini 3 Deep Think will remain in testing before launching first for AI Ultra subscribers. The big picture: Gemini 3 arrives amid fierce competition among the leading AI players. OpenAI released GPT-5 in August and updated it last week to 5.1, with additional personality options. * Elon Musk's xAI released Grok 4.1 on Monday, saying the update is far less likely to hallucinate than previous versions. Between the lines: The Gemini 3 rollout came later than some expected. Google said it wanted to give itself and its partners more time to test and integrate the model. * "One of the things we really wanted to optimize for with this model is putting it out broadly," Google senior director Tulsee Doshi told Axios. What we're watching: Doshi said to expect future versions of Gemini 3 optimized for running locally or delivering results that are faster and more cost-efficient.
[42]
Google launches Gemini 3, its most intelligent model ever
On Tuesday, the company launched Gemini 3, calling it its "most intelligent model." The new LLM (large language model) should have improved reasoning, helping it to understand the depth and nuance of the tasks it tackles. But according to Google, it should also be better at figuring out "the context and intent behind your request," which means you should be able to get things done with fewer prompts. Gemini hasn't just been updated in isolation. Google said it shipped the LLM at scale across its products, meaning that Gemini 3 is already live in AI mode in Google Search -- the first time the company has shipped a new model in Search on launch day (note, however, that it's currently only live for Google AI Pro and Ultra subscribers). Gemini 3 is now also live in the Gemini app, in AI Studio and Vertex AI, as well as in Google's agentic development platform, Antigravity. As is customary, Google immediately showed off Gemini 3's scores on various benchmarks. For example, Gemini 3 Pro is now the leader in LMArena with a score of 1501 Elo, ahead of xAI's Grok, and Gemini 3's predecessor, Gemini 2.5 Pro. And if you want to push it a little further, you can run Gemini 3 Deep Think, which is even better at reasoning and more suited for the most complex tasks. That model is notably better in certain benchmarks, including the notoriously tough Humanity's Last Exam. On top of all that, Google also redesigned its Gemini app. One key change is the My Stuff folder, where you can find the various chats and documents you've created with Gemini. The company also improved the shopping experience, and started experimenting with generative interfaces, which are created by the model as you prompt it. Finally, the Gemini Agent is a feature that can complete multi-step tasks for you, including managing your appointments and reminders, organizing your inbox, or performing research online.
[43]
'Holy S***... I'm Not Going Back to ChatGPT,' Says Marc Benioff After Using Gemini 3 | AIM
Salesforce CEO Marc Benioff has sparked a fresh wave of debate in the AI world after declaring that Google's newly launched Gemini 3 has decisively overtaken ChatGPT. In a post on X, Benioff wrote that he has used ChatGPT "every day for 3 years," but after spending two hours with Gemini 3, he's "not going back," calling the leap in reasoning, speed, images and video "insane." Benioff's emphatic endorsement comes just as Google unveiled Gemini 3, its most powerful multimodal AI model to date and Nano Banana Pro, an advanced image-generation system built on top of it. His comments immediately intensified comparisons between Google and OpenAI, the two companies locked in the most closely watched AI rivalry. According to Google, Gemini 3 brings major improvements in complex reasoning, multimodal understanding, and tool-use capabilities. It integrates text, images, video, and code processing, positioning it as Google's first truly general-purpose, agentic AI system across consumer and enterprise products. Alongside it, Google introduced Nano Banana Pro, a new image-generation and editing model that promises studio-grade visuals, reliable text rendering, multilingual support, consistent brand styling, and high-resolution (including 4K) output. The model is already rolling out across Google Workspace and the Gemini app, signalling Google's push to tie creative workflows directly into its AI ecosystem. The launches represent one of Google's most aggressive bids yet to reclaim AI leadership, especially as OpenAI continues rapid advancements with its GPT-5 series. Benioff's praise adds fuel to that momentum, offering rare public validation from a long-time power user of competing AI systems.
[44]
Google's Gemini 3 AI model makes its long-awaited debut, crushing rivals on top benchmarks - SiliconANGLE
Google LLC has come up with the perfect response to the bevy of artificial intelligence announcements at Microsoft Ignite this week, launching its most intelligent model: Gemini 3. The launch of Gemini 3 has been hotly anticipated for several months, and Google is doing everything it can to ensure that users won't be disappointed. As the company's smartest model yet, Google says, it possesses industry-leading reasoning capabilities and the ability to learn almost any new concept. It's being made broadly accessible from the get-go, launching in the Gemini application, Google Search and developer platforms such as AI Studio, Vertex AI and Google Antigravity - an all-new agentic development platform. Smarter, more thoughtful In a blog post, Google DeepMind Chief Executive Demis Hassabis claimed that Gemini 3 is the "best model in the world for multimodal understanding," as well as one of the most capable vibe coding models ever released. This is based on its high-level reasoning capabilities, which enable Gemini 3 to outperform its predecessor Gemini 2.5 on every major AI benchmark. Gemini 3 has jumped straight to the top of the LMArena Leaderboard with an all-time high score of 1,501 points, Hassabis said, and it also demonstrates Ph.D.-level reasoning capabilities with a 37.5% score on Humanity's Last Exam and a 91.9% score on GPQA Diamond. Moreover, it excels at mathematics, achieving a new high score of 23.4% on the MathArena Apex benchmark, while its factual accuracy also sets a new standard for LLMs, with a 72.1% score on SimpleQA Verified. Hassabis said users will immediately notice that Gemini 3 brings a lot more nuance and depth to every interaction it has with them, with smarter, more concise and direct responses, "trading cliche and flattery for genuine insight". Google has worked hard to ensure Gemini 3 tells users what they need to hear, rather than what they want to hear, Hassabis explained, allowing it to act more like a "true thought partner". And if Gemini 3 doesn't prove smart enough when pressed with more complex queries, users will be able to activate a new "Deep Think" mode, which enhances its reasoning skills by giving it more time to ponder before it generates a response. In addition, Gemini 3 boasts a unique ability to synthesize information about any topic across multiple modalities at once, including text, video, images, code and audio. "For example if you want to learn how to cook in your family tradition, Gemini 3 can decipher and translate handwritten recipes in different languages into a shareable family cookbook," Hassabis said. Alternatively, if people want to educate themselves about any new topic they're unfamiliar with, they can feed Gemini 3 various academic papers, video-based lectures and tutorials, and it will come up with a concise lesson, including interactive flashcards and visualizations, to help the user digest the material more easily. In addition, anyone who explores the AI Mode in Search will be treated to new Gemini 3-enabled experiences, such as immersive visual layouts, interactive tools and simulations, which will be quickly generated based on their query. Higher-level agentic coding Developers will likely be among the most avid users of Gemini 3, for Hassabis claims it's now the industry's best vibe coding model, thanks to its ability to handle more complex prompts and instructions.
His claim is backed up by yet more benchmarks, with the model achieving an Elo score of 1,487 on the WebDev Arena leaderboard and 54.2% on Terminal-Bench 2.0, which tests models for their ability to use a computer via the terminal. Hassabis said developers will be able to get started writing code with Gemini 3 from today in AI Studio, Vertex AI and the Gemini Command Line Interface tool, as well as third-party developer tools such as Cursor, GitHub, JetBrains, Replit and Manus. However, the most exciting way to access Gemini 3 may be with Google Antigravity, which is a new agentic development platform that supports agentic automation at a much higher, task-oriented level. Antigravity promises to transform developer productivity by evolving AI from an assistant into a true partner that's able to perform work on its own initiative. The platform looks much like a typical coding environment, but the agents have been granted direct access to the editor, the terminal and a web browser, giving them all of the tools they need to autonomously plan and execute complex programming tasks on behalf of humans. The Antigravity platform is powered by Gemini 3, and it's also tightly integrated with Google's Gemini 2.5 Computer Use model that provides advanced browser control, as well as its best image editing model, Nano Banana. Stronger security As always with new AI models, there's a security angle, and Google claims that Gemini 3 is also the most secure it has yet released. It said the model has undergone the most extensive set of evaluations of any of its models, being tested against critical domains in its own Frontier Safety Framework, and also by third-party organizations such as Apollo, Vaultis and Dreadnode. The results of those tests show that Gemini 3 demonstrates significantly reduced sycophancy and higher resistance to prompt injection attacks and malware-based attacks, Hassabis said. Meanwhile, Gemini 3's Deep Think mode is currently undergoing additional safety evaluations and input safety tests, prior to being made available to Google AI Ultra subscribers in the coming weeks.
[45]
Gemini 3 Can Help You Learn, Build Anything
Google is introducing Gemini 3, its newest and most intelligent model, but is also making news by shipping it on day 1 to Search users. This is a first for Google, meaning a lot of people now have access to it. So what makes Gemini 3 so special? Benchmark numbers mean hardly anything to us, but Google says that Gemini 3 combines all of Gemini's capabilities together. It says Gemini 3 is much better at figuring out the context and intent behind a request, which sometimes AI would have a hard time with. At its core, Gemini 3 can "learn anything" and "build anything." Google provides a few examples of things you can do with this power. AI Mode in Search is getting Gemini 3 access, enabling new generative UI experiences such as "immersive visual layouts" and "interactive tools and simulations." These results will be tailored to your specific query. For building, developers should appreciate the improvements Gemini 3 brings. Google says it has top scores on the WebDev Arena leaderboard at 1487 Elo, 54.2% on Terminal-Bench 2.0, and outperforms Gemini 2.5 Pro on SWE-bench Verified (76.2%). Whether you're building a website, a game, or an app, Gemini 3 should be more helpful than previous iterations. Google says Gemini 3 is available inside of AI Studio now. Gemini Agent: This is an experimental feature that handles multi-step tasks directly inside Gemini. It connects to Google apps to manage different aspects, such as your calendar and email inbox. For example, you can ask it to, "Research and help me book a mid-size SUV for my trip next week under $80/day using details from my email." Gemini will then locate the email with flight information, compare rentals within your proposed budget and prepare the booking for you to look over. This function will first be available to Google AI Ultra subscribers, starting today. For those wanting to get their hands on Gemini 3, it's available today inside of the Gemini app. Google AI Pro and Ultra subscribers can access it also in AI Mode in Search, while developers can access it today inside of the Gemini API in AI Studio.
[46]
Gemini 3 and Antigravity, explained: Why Google's latest AI releases are a big deal | Fortune
The timing carries weight. Coming just seven months after Gemini 2.5 and less than a week after OpenAI released GPT 5.1, the rollout underscores the pace at which leading AI companies are advancing their technology. Google is making Gemini 3 immediately available through the Gemini app, which has more than 650 million monthly users, and through developer platforms including AI Studio and Vertex AI. "With Gemini 3, we're seeing this massive jump in reasoning," said Tulsee Doshi, Google's head of product for the Gemini model. "It's responding with a level of depth and nuance that we haven't seen before." What distinguishes this release is Google's emphasis on agentic capabilities -- the model's ability to plan and execute multi-step tasks with reduced human intervention. Demis Hassabis, CEO of Google DeepMind, described Gemini 3 as evolving from "simply reading text and images to reading the room." The model combines what Google calls state-of-the-art reasoning with multimodal understanding, processing text, images, video, audio and code simultaneously. That multimodal strength shows up in benchmarks focused on visual and video understanding. Gemini 3 scored 81% on MMMU-Pro and 87.6% on Video-MMMU -- higher marks than Claude Sonnet 4.5 and ChatGPT 5.1. The model can also generate what Google calls "generative UI" -- designing custom interfaces in real-time based on prompts, from interactive physics simulations to mortgage calculators. Antigravity represents Google's effort to reimagine developer tools for this new generation of AI. Unlike previous setups where an AI chatbot sits in the corner of your screen and you ask it questions, Antigravity puts the AI in charge of a dedicated workspace. The AI can look at your code, understand what you're trying to build, write code itself, test it, and catch problems -- all with less human intervention needed. Rather than treating AI as an assistant within the editor, Antigravity elevates agents to a dedicated surface with direct access to the editor, terminal, and browser. "Using Gemini 3's advanced reasoning, tool use and agentic coding capabilities, Google Antigravity transforms AI assistance from a tool in a developer's toolkit into an active partner," Google said in its announcement. The platform allows agents to autonomously plan and execute software tasks while validating their own code. Antigravity is free during its public preview, though users have reported rate limits that refresh every five hours. The platform includes access to Gemini 3 Pro, Anthropic's Claude Sonnet 4.5, and OpenAI's GPT-OSS. The Search integration marks another departure from past practice. "This is the first time we're shipping Gemini in Search on day one," Alphabet CEO Sundar Pichai wrote in the company's announcement post. Paying subscribers to Google AI Pro ($20 per month) and Ultra ($250 per month) can access Gemini 3 in Search's AI Mode, which uses the model's reasoning capabilities to generate visual layouts with interactive elements. "Gemini 3 is also making Search smarter by re-architecting what a helpful response looks like," Stein said. "With new generative UI capabilities, Gemini 3 in AI Mode can now dynamically create the overall response layout when it responds to your query -- completely on the fly." Google is also releasing Gemini 3 Deep Think, an enhanced reasoning mode that further improves performance on complex problems.
The mode achieved 41.0% on Humanity's Last Exam without tools and 93.8% on GPQA Diamond -- as you might imagine, both were best-in-class. Deep Think will become available to Google AI Ultra subscribers after additional safety testing. Major development platforms are integrating the model. GitHub reported that Gemini 3 Pro demonstrated 35% higher accuracy in resolving software engineering challenges than Gemini 2.5 Pro in early testing. JetBrains noted more than a 50% improvement in the number of solved benchmark tasks. The model is also being added to Cursor, Manus, Replit and other coding tools. There's a lot of positive sentiment online right now, but of course, it's still early days. The release aims to put Google back in a competitive position it lost when ChatGPT launched in late 2022. After facing criticism over early Gemini outputs and setbacks with AI Overviews in Search, the company has steadied its approach. AI Overviews now reach 2 billion users monthly, and more than 70% of Google Cloud customers use the company's AI products.
[47]
Start building with Gemini 3
Today we are introducing Gemini 3, our most intelligent model that can help bring any idea to life. Built on a foundation of state-of-the-art reasoning, Gemini 3 Pro delivers unparalleled results across every major AI benchmark compared to previous versions. It also surpasses 2.5 Pro at coding, mastering both agentic workflows and complex zero-shot tasks. Gemini 3 Pro fits right into existing production agent and coding workflows, while also enabling new use cases not previously possible. It's available in preview at $2/million input tokens and $12/million output tokens for prompts 200k tokens or less through the Gemini API in Google AI Studio and Vertex AI for enterprises (see pricing for rate limits and full pricing details). Additionally, it can be utilized via your favorite developer tools within the broader ecosystem and is available, with rate limits, free of charge in Google AI Studio.
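For a rough sense of what that preview pricing means in practice, here is a back-of-the-envelope cost estimate using only the figures quoted above ($2 per million input tokens, $12 per million output tokens for prompts of 200k tokens or less). It is a sketch under those stated rates; actual billing tiers, rate limits, and long-context pricing may differ.

```python
# Simple cost estimator based on the preview pricing quoted in the
# announcement; not an official billing calculation.
def gemini3_preview_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single request at the quoted preview rates."""
    if input_tokens > 200_000:
        raise ValueError("Prompts above 200k tokens are priced differently.")
    return input_tokens / 1_000_000 * 2.00 + output_tokens / 1_000_000 * 12.00

# Example: a 50k-token prompt producing a 4k-token answer.
print(f"${gemini3_preview_cost(50_000, 4_000):.3f}")  # $0.148
```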
[48]
Google unveils Gemini's next generation, aiming to turn its search engine into a 'thought partner'
SAN FRANCISCO (AP) -- Google is unleashing its Gemini 3 artificial intelligence model on its dominant search engine and other popular online services in the high-stakes battle to create technology that people can trust to enlighten them and manage tedious tasks. The next-generation model unveiled Tuesday comes nearly two years after Google took the wraps off its first iteration of the technology. Google designed Gemini in response to a competitive threat posed by OpenAI's ChatGPT that came out in late 2022, triggering the biggest technological shift since Apple released the iPhone in 2007. Google's latest AI features initially will be rolled out to Gemini Pro and Ultra subscribers in the United States before coming to a wider, global audience. Gemini 3's advances include a new AI "thinking" feature within Google's search engine that company executives believe will become an indispensable tool that will help make people more productive and creative. "We like to think this will help anyone bring any idea to life," Koray Kavukcuoglu, a Google executive overseeing Gemini's technology, told reporters. As AI models have become increasingly sophisticated, the advances have raised worries that the technology is more prone to behave in ways that jumble people's feelings and thoughts while feeding them misleading information and fawning flattery. In some of the most egregious interactions, AI chatbots have faced accusations of becoming suicide coaches for emotionally vulnerable teenagers. The various problems have spurred a flurry of negligence lawsuits against the makers of AI chatbots, although none have targeted Gemini yet. Google executives believe they have built in guardrails that will prevent Gemini 3 from hallucinating or being deployed for sinister purposes such as hacking into websites and computing devices. Gemini 3's responses are designed to be "smart, concise and direct, trading cliche and flattery for insight -- telling you what you need to hear, not just what you want to hear. It acts as a true thought partner," Kavukcuoglu and Demis Hassabis, CEO of Google's DeepMind division, wrote in a blog post. Besides providing consumers with more AI tools, Gemini 3 is also likely to be scrutinized as a barometer that investors may use to get a better sense about whether the massive torrent of spending on the technology will pay off. After starting the year expecting to spend $75 billion, Google's corporate parent Alphabet recently raised its capital expenditure budget from $91 billion to $93 billion, with most of the money earmarked for AI. Other Big Tech powerhouses such as Microsoft, Amazon and Facebook parent Meta Platforms are spending nearly as much -- or even more -- on their AI initiatives this year. Investors so far have been mostly enthusiastic about the AI spending and the breakthroughs they have spawned, helping propel the values of Alphabet and its peers to new highs. Alphabet's market value is now hovering around $3.4 trillion, more than doubling in value since the initial version of Gemini came out in late 2023. Alphabet's shares edged up slightly Tuesday after the Gemini 3 news came out. But the sky-high values also have amplified fears of a potential investment bubble that will eventually burst and drag down the entire stock market. For now, AI technology is speeding ahead. OpenAI released its fifth generation of the AI technology powering ChatGPT in August, around the same time the next version of Claude came out from Anthropic.
Like Gemini, both ChatGPT and Claude are capable of responding rapidly to conversational questions involving complex topics -- a skill that has turned them into the equivalent of "answer engines" that could lessen people's dependence on Google search. Google quickly countered that threat by implanting Gemini's technology into its search engine to begin creating detailed summaries called "AI Overviews" in 2023, and then introducing an even more conversational search tool called "AI mode" earlier this year. Those innovations have prompted Google to de-emphasize the rankings of relevant websites in its search results -- a shift that online publishers have complained is diminishing the visitor traffic that helps them finance their operations through digital ad sales. The changes have been mostly successful for Google so far, with AI Overviews now being used by more than 2 billion people every month, according to the company. The Gemini app, by comparison, has about 650 million monthly users. With the release of Gemini 3, the AI mode in Google's search engine is also adding a new feature that will allow users to click on a "thinking" option in a tab that company executives promise will deliver even more in-depth answers than has been happening so far. Although the "thinking" choice in the search engine's AI mode initially will only be offered to Gemini Pro and Ultra subscribers, the Mountain View, California, company plans to eventually make it available to all comers.
[49]
Google released Gemini 3 Pro with Gemini Agent
Google announced Gemini 3 Pro, its most intelligent AI model, a few weeks before the first anniversary of Gemini 2. The model delivers state-of-the-art reasoning and class-leading coding performance. It becomes available immediately in the Gemini app, Google Search's AI Mode, and developer platforms. Gemini 3 Pro sets a new benchmark record on Humanity's Last Exam, a test recognized as one of the most challenging evaluations for AI systems. The model achieves 37.5 percent accuracy, surpassing the prior leader, Grok 4, by 12.1 percentage points. This result comes without reliance on external tools such as web search, demonstrating the model's standalone capabilities in handling complex, diverse questions across subjects. On the LMArena leaderboard, Gemini 3 Pro claims the top position with 1,501 points. This score reflects its performance across multiple evaluation categories, positioning it ahead of competing models in real-world user preference metrics compiled by the platform. Within the Gemini app, Gemini 3 Pro generates responses that are more concise and exhibit improved formatting. Users access the model by selecting "Thinking" from the model picker, making it available to all users. AI Plus, Pro, and Ultra subscribers benefit from higher rate limits, allowing extended usage before encountering restrictions. Gemini 3 Pro enables Gemini Agent, a feature that extends capabilities from Project Mariner, Google's web-surfing Chrome AI introduced at the end of the previous year. Gemini Agent performs specific tasks on behalf of users. For instance, when managing an email inbox, the agent executes the work directly rather than providing only general advice, such as organizing messages or drafting replies based on user instructions. To utilize Gemini Agent fully, users grant it access to their Google apps, integrating it seamlessly with services like Gmail and Calendar. This permission allows the agent to interact with personal data and complete actions within those environments. In Google Search's AI Mode, Gemini 3 Pro launches first for AI Pro and Ultra subscribers through the "Thinking" dropdown selection. The model extends to AI Overviews, targeting the most difficult queries posed to the search engine. A new routing algorithm for AI Mode and AI Overviews deploys in coming weeks, directing complex questions to Gemini 3 Pro automatically based on query analysis. Gemini 3 Pro enhances AI Mode's content discovery by refining the fan-out search technique. This method expands queries into multiple parallel searches. The model's advanced intelligence conducts additional searches and identifies relevant, credible content that earlier models overlooked, improving result comprehensiveness. The model's multi-modal understanding supports dynamic interfaces in AI Mode responses. For mortgage loan research, it generates an interactive loan calculator embedded in the output, enabling users to input variables like interest rates, loan amounts, and terms to compute payments and amortization schedules directly within the search interface. Developers and enterprise customers access Gemini 3 Pro via the Gemini API, AI Studio, and Vertex AI. These platforms provide APIs for integration, prompt testing in AI Studio, and scalable deployment through Vertex AI for production workloads. Google released Antigravity, a new agentic coding application. Antigravity programs autonomously, generates its own tasks to advance projects, and delivers progress reports to users. 
It handles full development cycles, from initial code generation to iterative improvements and status updates. Alongside Gemini 3 Pro, Google introduced Gemini 3 Deep Think, an enhanced reasoning mode. Initial availability goes to safety testers, followed by rollout to AI Ultra subscribers. This mode amplifies reasoning depth for particularly demanding tasks. Gemini 3 Pro's deployment spans consumer and professional applications. In the Gemini app, its concise formatting streamlines information presentation, reducing verbosity while maintaining completeness. The integration with Gemini Agent shifts from advisory responses to proactive execution, handling real-world workflows like email triage through direct API calls to Google services. AI Mode's fan-out augmentation with Gemini 3 Pro processes queries with greater precision, executing broader search variants to surface authoritative sources. The dynamic interfaces leverage the model's ability to interpret text, images, and data jointly, producing tools like financial calculators that respond to user inputs in real time. Antigravity's autonomous task creation allows it to break down coding projects into subtasks, such as designing algorithms, implementing functions, and debugging, while reporting milestones like completed modules or test coverage percentages. Gemini 3 Deep Think extends reasoning chains, applying iterative thinking to refine solutions before final output.
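The developer story is the most concrete piece of that rollout. As a rough sketch of what calling Gemini 3 Pro through the Gemini API could look like with Google's google-genai Python SDK, consider the snippet below; the model identifier "gemini-3-pro-preview" and the prompt are assumptions for illustration, not details confirmed in the coverage above. AI Studio exposes the same models behind a prompt-testing UI, while Vertex AI wraps them for production deployment.

```python
import os
from google import genai

# Hypothetical example: the model ID below is an assumed identifier for Gemini 3 Pro.
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption for illustration
    contents="Summarize the trade-offs between fixed-rate and adjustable-rate mortgages.",
)
print(response.text)
```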
[50]
Gemini 3 just launched -- here are 5 powerful features you need to try first
Google's newest AI model has useful improvements you don't want to miss. Google's Gemini 3 model is finally here, and with impressive, record-breaking benchmark scores for writing and coding, it's worth checking out what all those upgrades can really do. Google has quietly improved reasoning, sped up everyday tasks and refreshed the Gemini experience across its app, Search and developer tools. After testing Gemini 3 vs ChatGPT-5.1, it's easy for me to see why this model is what everyone wants to get their hands on. Now available for free in the Gemini app and within Search for Ultra and Pro users, here are five real upgrades that make Gemini 3 more capable -- and more useful -- than previous versions. Google says Gemini 3 delivers state-of-the-art reasoning, and after using it for the last 24 hours, I can honestly say this is the biggest difference you'll notice right away. The model handles multi-step logic, planning tasks and difficult questions more reliably than Gemini 2.5. There are major upgrades across the board, including better long-form problem solving, clearer step-by-step explanations and significant improvements in math, logic and overall reasoning. You'll also see more accurate results on complex queries, which helps the model break down harder questions with far more precision. All of these enhancements power new features throughout the Google ecosystem -- especially inside Search, where Gemini 3's deeper reasoning is now front and center. If you thought Nano Banana was good for your most creative projects, you'll be blown away by Gemini 3's ability to interpret images, screenshots, handwritten documents and mixed inputs more accurately and with better explanations. In other words, multimodality got a major upgrade. These upgrades make the model far more useful for real-world tasks like troubleshooting issues, studying visual material and analyzing complex information without needing multiple tools or extra steps; everything can be done within the chat. Perhaps the most anticipated upgrade is the way Gemini 3 improves Google Search. Ultra and Pro users are now experiencing Gemini 3 powering Google's AI Overviews and improving how Search interprets open-ended questions. Because Google is notorious for giving away their most useful models for free, I have no doubt that those with Gemini Advanced or even the free tier will see Gemini 3 in Search soon. Since trying Gemini 3 in Search, I've experienced smarter, more personal responses that better synthesize information across more sources. I've also seen more accurate and relevant summaries. Also, like me, you may notice less keyword matching and more actual understanding. For me, this is one of the biggest behind-the-scenes upgrades and most people might not realize that it's Gemini 3 making their results feel sharper. Here's a secret: I have never liked the Gemini app -- until now. With this latest upgrade, Google refreshed the Gemini app for faster performance, smoother interactions and improved reliability. A strong app plus Gemini's better reasoning gives you a noticeably stronger all-purpose assistant. Even Gemini Live feels more capable. Give the app a try and you'll see improvements in speed and latency (something the Google team really stressed in a pre-rollout briefing), conversation accuracy, long, multi-step chats, task execution and follow-through. These changes make everyday use -- from writing to planning to helping with homework -- feel smoother. 
Speaking of homework, Google is also giving students a free year of its Pro plan. That includes unlimited chats, image uploads, quiz generation, access to the Gemini 3 Pro model, Deep Research, audio overviews and 2TB of storage. It's a sizable upgrade at no cost -- but only for students -- and the offer expires January 31, 2026. Google also rolled out improvements to its developer ecosystem, including updated APIs, support in Google AI Studio and better tooling for building agents and apps. I've said it before, but it's worth repeating: I am not a developer, but I am obsessed with Google AI Studio and have made some pretty cool apps. The rollout of Gemini 3 gives developers (even if you don't know it yet, that's you) access to Gemini 3's reasoning capabilities, including more stable APIs, faster prototyping in AI Studio, and improved pathways for integrating Gemini into apps and workflows. These upgrades will certainly shape the next generation of apps powered by Gemini. And even if you never touch the code, the apps you rely on will quietly get faster, more capable and more helpful because of these changes. This update is a huge one. Gemini 3 is a meaningful leap in reasoning, multimodality and daily usefulness. The improvements show up everywhere from Google Search to the Gemini app itself, and while many users may not realize it yet, this update lays the foundation for a much smarter Google ecosystem. If Google keeps iterating at this pace, Gemini 3 could mark the moment the company finally closes the gap with -- or even surpasses -- its biggest rivals. Your move, OpenAI.
[51]
Gemini 3 is here - 3 things to know about the major AI update
Search will use Gemini 3 to make interactive and visual responses to your more complex questions. Google has officially debuted Gemini 3, marking a major upgrade for its AI models and the platforms employing them. It brings new reasoning modes, fresh app features, and an overhaul of the Google ecosystem that could change how people engage with Google products in day-to-day life. The new model arrives with its usual stack of benchmarks and leaderboards, but the more interesting part is what it means for those skeptical about whether they should experiment with AI. Gemini 3 comes in two flavors: Gemini 3 Pro and Gemini 3 Deep Think. Pro is the everyday, full-featured version available immediately in apps, Search, and developer tools. Deep Think is the "enhanced reasoning" mode with an extra gear, currently in testing and destined for the deep-pocketed Google AI Ultra subscribers. But both share the same core. Google is positioning Gemini 3 as a leap forward in reasoning, not just raw size or speed. The company likes to talk about AGI "steps" in cautious forecasts, but the tone around this release is plainly more confident. The company is keen to boast about how well Gemini 3 can perform at everything from parsing and translating a handwritten recipe to being able to merge it with a voice note and write a cookbook from the combination. That kind of multimodality is where Gemini 3 feels most transformed. The model's video analysis now understands movement, timing, and other details. It can even analyze a sports game and suggest a training plan for players. And its million-token context window means it can keep track of sprawling, real-world information without falling apart halfway through a long session. Gemini 3 doesn't arrive in isolation. The Gemini app is getting one of its largest overhauls, with a new interface navigation system and a "My Stuff" folder for every piece of AI-generated content you've prompted. The new app should feel more helpful by default, according to Google. The most noticeable change is the new generative interfaces. Gemini 3 builds them in real time based on what you're asking for instead of just using a template. As an example, Google suggested that asking for help planning a vacation might produce a magazine-style itinerary complete with visuals. Or instead of a wall of text to answer a complicated question, you'd see a visually-heavy layout of diagrams, tables, and other illustrations. The Gemini app is also getting an agent to act on your behalf. If you give Gemini a task that requires a few dozen steps, it can carry them out using your connected Google apps. The agent is starting with Google AI Ultra members and expanding from there. Gemini 3 is changing how Google handles complex questions. For the first time, a Gemini model is available in Search immediately, and Google is routing the toughest queries to it. U.S. Google AI Pro and Ultra subscribers will see Gemini 3 Pro in AI Mode, with broader access coming soon. Gemini 3 improves Google's fan-out approach of searching multiple interpretations of your question to find relevant content. Gemini 3 understands intent deeply enough to discover material that earlier versions routinely missed. The most striking upgrade is the new generative UI. When you ask a complex question, Gemini 3 constructs a layout with visuals, tables, grids, and even custom-coded interactive simulations to go with the answer. A question about the three-body problem produces a manipulable model. 
A question about loans generates a calculator tailored to the details you've included. Answers become more like small apps. There are plenty of links to the source material as well, theoretically encouraging follow-ups by users. Google says this system will evolve, particularly as automatic model selection routes more queries to Gemini 3 behind the scenes.
[52]
Google debuts Gemini 3 as DeepMind CEO discusses putting the entire search index into memory
After speculation began a month ago, Google Gemini 3 debuted on Tuesday, solidifying what many see as an already strong lead for Google in the world of AI-informed technology, even as other services, like OpenAI's ChatGPT, still dominate the chatbot market. Gemini 3 is everywhere you use Google services. If you pay for a "Pro" or "Ultra" subscription, it's in even more places to analyze documents, offer proactive suggestions for travel or shopping plans, design a website, and more. Google also announced this week that if you're a college student in the US, you can use Gemini 3 for free. According to Google leaders, Gemini 3 is, of course, just the beginning. Demis Hassabis, the co-founder and CEO of Google DeepMind, the AI-developing arm at Google, has said as much. In an interview with Alex Heath for the excellent Sources newsletter, Hassabis said a future version of Gemini might include every site in the Google search index. Court documents have revealed that the number is about 4 billion documents. One day, Gemini could have all of those documents in its memory. Here's the exchange: Sources: I've heard there's internal interest in fitting the entire Google search index into Gemini, and that this idea dates back to the early days of Google, when Larry Page and Sergey Brin discussed it in the context of AI. What's the significance of that if it were to happen? Hassabis: Yeah. We're doing lots of experiments, as you can imagine, with long context. We had the breakthrough to the 1 million token context window, which still hasn't really been beaten by anyone else. There has been this idea in the background from Jeff Dean, Larry, and Sergey that maybe we could have the entire internet in memory and serve from there. I think that would be pretty amazing. The question is the retrieval speed of that. There's a reason why the human memory doesn't remember everything; you remember what's important. So maybe there's a key there. Machines can have way millions of times more memory than humans can have, but it's still not efficient to store everything in a kind of naive, brute force way. We want to solve that more effectively for many reasons. That's the possible future of Gemini, but what can this third version do? Essentially, Gemini 3 "thinks" better than previous versions, and it doesn't offer near-instantaneous answers like Gemini 2.5, which was released in March, does. Google explains that it takes longer for an answer to form because it is "built to grasp depth and nuance," in the words of a Google blog post. The wait is no surprise, though; "thinking" or "reasoning" LLM models take longer to yield answers to prompts. The increased attention to nuance means the Gemini 3 large language model can perceive "subtle clues in a creative idea," based on whatever text you type (or say) into it. A Google modal that shows up on gemini.google.com for users who haven't yet used Google Gemini 3 reads, "Gemini 3 Pro is here. It's our smartest model yet -- more powerful and helpful for whatever you need: Expert coding & math help; next-level research intelligence; deeper understanding across text, images, files, and videos." In my basic testing of Gemini 3 using my Google Pro subscription at gemini.google.com, I can see the new model -- called "Thinking" -- as one of the options. And the results are richer, too. 
Perhaps the best compliment that can be paid to the progress of Gemini, aside from the various third-party reviews and hot takes that have sprung up in the last 24 hours, is the one from a competitor. Sam Altman, CEO of OpenAI, which is the largest competitor to Gemini with its ChatGPT chatbot, extended congratulations on Wednesday, posting on X: "Congrats to Google on Gemini 3! Looks like a great model." How well does Gemini 3 plan a road trip, and how does it compare to Gemini 2.5? Using Gemini 2.5, I asked for a road trip itinerary from Brooklyn, New York, to Columbus, Ohio, next month. In a matter of seconds, I received a table that broke down the trip by hour, including suggested breaks. I also received general information about it being a busy travel time of year, tips on the brutal Pennsylvania weather, tolls, and a Google Maps version of my route. When I asked the same question of Gemini 3, the wait for data was about a minute as Gemini 3 offered cute text updates like "Refining directional nuance." I again received a table of suggestions for when to leave and when to stop, but I also was met with warm recommendations for lunch, including a space that offered "farm-to-table food in a cool artistic space." Gemini also suggested detours like this one: "Alternative Scenic Stop: If you make excellent time and want a historic detour, Flight 93 National Memorial is located near Shanksville, PA (off US-30, a slight detour from the Turnpike). It is a somber but beautiful site, especially in winter." There are, of course, far more advanced use cases for Gemini 3. One user created a 3D LEGO editor. How to find and use Google Gemini 3? Here's where to look: if you have a Google Gemini Pro subscription, you can use Google Gemini 3 in the search giant's AI Mode, which rolled out earlier this year as an alternative to traditional searches. You can also use Gemini 3 in the phone app and on the web. If you're a developer, you can access Gemini 3 in AI Studio and Vertex AI, as well as in Google Antigravity, the company's AI-assisted developer platform that also debuted this week. If you want to use Gemini 3 for free, you can, but the usual usage limits apply, similar to how other LLMs like ChatGPT and Claude limit the number of prompts per day.
[53]
Gemini 3 Finally Arrives for Smart Devices along with New App Tweaks - Phandroid
Google recently announced its latest update to the Gemini app, which finally brings over the much-awaited Gemini 3 model. Google says that version 3 is capable of better responses with enhanced formatting, in addition to improved performance in areas like code generation and multimodal understanding. As such, the update brings Generative Interfaces -- this includes a "visual layout" for immersive, magazine-style views (like a travel itinerary) and "dynamic view" for real-time, custom-coded interactive interfaces which can immediately adapt to a user's prompt. There's also support for an experimental Gemini Agent, a new tool that orchestrates and completes complex, multi-step tasks across Google apps, such as organizing an inbox or researching and preparing a car rental booking based on details in a user's email. The Gemini Agent is rolling out first to Google AI Ultra subscribers in the U.S. As for the Gemini app itself, Google has tweaked the UI a bit, which should make chats and finding created content a lot easier, as well as improved shopping results with integrated product listings, comparison tables, and prices directly from Google's Shopping Graph. Gemini 3 Pro is rolling out globally starting today and is accessible by selecting "Thinking" in the model selector.
[54]
Google Launches Gemini 3, Claims Benchmark Lead Over GPT-5.1 and Claude Sonnet 4.5 | AIM
Google on Tuesday announced Gemini 3, calling it another big step on the path toward AGI. "It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem," said Google CEO Sundar Pichai in a statement. Google said it is rolling out Gemini 3 across its major products, including Search. The model is now live in AI Mode in Search with expanded reasoning capabilities and new dynamic experiences. The model is also available in the Gemini app, as well as to developers through AI Studio, Vertex AI and Google's new agent-focused development platform, Google Antigravity. Demis Hassabis, CEO of Google DeepMind, and Koray Kavukcuoglu, the company's CTO and chief AI architect, announced in a joint statement that Gemini 3 Pro is now available in preview. "We're beginning the Gemini 3 era," they said, noting that the model is being integrated into Search, Workspace, the Gemini app and developer platforms. Google said Gemini 3 Pro outperforms Gemini 2.5 Pro, OpenAI GPT-5.1 and Claude Sonnet 4.5 across major AI benchmarks, including LMArena, Humanity's Last Exam, GPQA Diamond and MathArena Apex. The company highlighted improvements in multimodal capabilities, citing scores of 81% on MMMU-Pro and 87.6% on Video-MMMU. It also recorded 72.1% on SimpleQA Verified, a measure of factual accuracy. The launch also introduced Gemini 3 Deep Think, an improved reasoning mode. Google said it scores 41% on Humanity's Last Exam, 93.8% on GPQA Diamond and 45.1% on ARC-AGI-2 with code execution. "Deep Think pushes the boundaries of intelligence even further," the company said. With broader multimodal input, longer context and new planning abilities, Google said users can apply Gemini 3 to tasks such as analysing research papers, translating handwritten family recipes, generating visualisations, or evaluating sports performance. In Search, AI Mode now supports generative UI elements and interactive simulations. For developers, Google launched Google Antigravity, an agent-first development platform built around Gemini 3. The company said Antigravity allows agents to "autonomously plan and execute complex, end-to-end software tasks" with direct access to an editor, terminal and browser. Gemini 3 also integrates with tools including Google AI Studio, Vertex AI, Gemini CLI, Cursor, GitHub, JetBrains and Replit. The model's long-horizon planning was cited as another improvement. Google said Gemini 3 Pro leads the Vending-Bench 2 leaderboard, sustaining consistent decision-making over a simulated year of operations. Subscribers to Google AI Ultra can access these agentic capabilities through Gemini Agent in the Gemini app. Google emphasised expanded safety testing, saying Gemini 3 has undergone its most extensive evaluations to date, including assessments by external partners such as Apollo, Vaultis and Dreadnode. "Gemini 3 is our most secure model yet," the company said, noting reduced sycophancy, better prompt-injection resistance and stronger protection against misuse.
[55]
Google releases its heavily hyped Gemini 3 AI in a sweeping rollout | Fortune
Google released its Gemini 3 AI model today after weeks of social-media hype, vague posting, and wink emojis. An early-morning leak of the Pro version's model card -- which outlines key details about the system and its benchmark performance -- had developers posting on X as though Santa had arrived early. Even former OpenAI researcher and co-founder Andrej Karpathy joked about the buildup: "I heard Gemini 3 answers questions before you ask them. And that it can talk to your cat," he wrote on X. It remains to be seen, of course, whether the model lives up to the hype that it would, as one X user put it, "absolutely crush" all other state-of-the-art models, including OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5/Opus 4 and xAI's Grok 4. But what is clear is that Google's confident, widespread release of Gemini 3, in Pro and "Deep Think" versions, is a long way from its tentative debut of the first Gemini model in February 2024 -- when the company faced intense backlash over "woke" outputs and ahistorical or inaccurate images and text, ultimately admitting it had "missed the mark." Its Gemini-powered AI Overviews in Search also triggered an online furor after the system famously told users to eat glue and rocks. This time around, Gemini 3 is getting a sweeping day-one rollout across a large swath of Google's ecosystem with its billions of users -- including its fastest-ever deployment into Google Search. "This is the very first time we're shipping our latest Gemini model in search," said Robby Stein, vice president of product for Google Search, in a press preview. That includes Google's AI Pro and Ultra subscribers getting access to Gemini 3 in Search's AI Mode, with new visual layouts featuring interactive tools and elements like images, tables and grids. Google also benefits from the fact that, unlike during past AI rollouts, OpenAI didn't manage to steal its thunder this time. OpenAI already debuted its massively hyped GPT-5 model in August -- a release many observers said fell short and was underwhelming. Last week, the company released a 5.1 update it described as "smarter" and "more conversational," with eight different "personalities" to choose from, but that still left the door wide open for Google to make Gemini waves. In a blog post introducing Gemini 3, Alphabet and Google CEO Sundar Pichai boasted that Google's AI Overviews now has 2 billion users per month, while the Gemini app has more than 650 million active monthly users, and more than 13 million developers are building with Gemini, Google's "most intelligent model." Today, he wrote, "we're shipping Gemini at the scale of Google." Google also crowed about the model's results on major AI industry benchmarks, saying that it beat the earlier Gemini 2.5 Pro on every major test of reasoning. It said the model performs extremely well on academic-style challenges testing logic, math, science, and problem-solving, reaching scores that Google claimed resemble "PhD-level" reasoning. It also said the model improved on factual accuracy. Google also claimed the model is more thoughtful and useful in conversation: Instead of giving generic flattery or buzzword-filled answers -- much-disliked features of many chatbot responses -- it's supposed to offer clearer, more direct insight. 
In addition, Google said Gemini 3 has "undergone the most comprehensive set of safety evaluations of any Google AI model to date," adding that the model shows "reduced sycophancy, increased resistance to prompt injections and improved protection against misuse via cyberattacks." Over the past year, AI security experts have shared many examples of Gemini's vulnerability to prompt injections, in which attackers manipulate the model by embedding malicious instructions into its input, and other types of threats. Amid rising publisher fears that Google's AI Overviews are causing a "traffic apocalypse" that kills click-throughs to news sites, Google continued to insist that it will keep connecting users to publisher content. That reassurance comes despite research showing that users are less likely to click on result links when an AI summary appears -- and that when AI summaries do surface sources, users rarely click through to them. "We continue to send billions of clicks to the web every day, and we're prominently highlighting the web in our Search AI experiences in a way that encourages onward exploration," a Google spokesperson told Fortune by email. "As always, we continue to prominently display links to the web throughout the AI Mode response, so people can continue learning and exploring." Google also pointed to its "query fan-out technique" -- essentially, taking a user's single question and breaking it into many smaller, behind-the-scenes searches to gather more relevant information. "Now, not only can it perform even more searches to uncover relevant web content, but because Gemini more intelligently understands your intent, it can find new content that it may have previously missed," the spokesperson said. "This means Search can help you find even more highly relevant web content for your specific question." No matter how Gemini 3 is received, there's little question that Google is far ahead of where it stood less than three years ago, when ChatGPT's arrival sparked an internal "code red." The company is also playing to its strengths, looking directly at what its billions of consumers want -- including unveiling first-of-its-kind generative shopping interfaces in the Gemini app with product listings, comparison tables and live pricing pulled from Google's 50-billion-item Shopping Graph. That, of course, is Google's not-so-secret sauce: the massive amounts of data that flow through its products every day. And Gemini 3 is yet another reminder that few companies, if any, have the data foundation or the global reach to ship AI at this scale. Still, even Pichai, the company's CEO, is urging caution when it comes to AI. In a new interview with the BBC, he said people should not "blindly trust" everything AI tools tell them, adding they are "prone to errors" and urging people to use them alongside other tools. Pichai also warned that no company would be immune if the AI bubble burst. Presumably even Google.
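To make the "query fan-out technique" concrete, here is a toy sketch of the pattern the spokesperson describes: one question is rewritten into several behind-the-scenes sub-queries that run in parallel and then get merged. The search stub and the hard-coded rewrites are hypothetical stand-ins; in the real system the model itself decides which sub-queries to issue.

```python
import asyncio

async def search(query: str) -> list[str]:
    """Placeholder for a single web-search call (hypothetical stub)."""
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for: {query}"]

async def fan_out(question: str) -> list[str]:
    # Hard-coded rewrites purely for illustration; a real system would let
    # the model generate the sub-queries from the user's intent.
    sub_queries = [
        question,
        f"{question} comparison",
        f"{question} pros and cons",
        f"{question} expert guide",
    ]
    batches = await asyncio.gather(*(search(q) for q in sub_queries))
    return [hit for batch in batches for hit in batch]  # merge; dedupe/rank upstream

if __name__ == "__main__":
    hits = asyncio.run(fan_out("best fixed-rate mortgage"))
    print(len(hits), "candidate sources")
```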
[56]
Gemini App and AI Mode in Search Get New Features With Gemini 3 AI Model
* Google is adding automatic model selection to Search
* AI Mode in Search can now generate visual layouts
* Both the Gemini app and AI Mode now support dynamic views
Google has now released the Gemini 3 Pro artificial intelligence (AI) model. Although it is available in preview, and the rollout across the globe might take a couple of days, the Mountain View-based tech giant has already started announcing its integration across its suite of products. Two of the company's products that are getting the model's capabilities are the Gemini app and AI Mode in Search. The tech giant is introducing a new capability called generative interfaces in both of these platforms, which will offer users a more visual and interactive way of finding information. New Features in Gemini App In a blog post, Josh Woodward, the Vice President of Google Labs, Gemini and AI Studio at Google, introduced new improvements in the Gemini app. While the company uses the phrase "app," these changes will be available across the Android and iOS app, as well as the Gemini web interface. The company says it is redesigning the Gemini platform with a clean and modern look. What that means is users will find it easier to start chats; find images, videos, and reports they have created via a new My Stuff folder; and enjoy shopping for products with the integration of Google's Shopping Graph. However, the biggest introduction is generative interfaces or generative UI. In a separate blog post, the company describes this capability as an interface "which dynamically creates immersive visual experiences and interactive interfaces -- such as web pages, games, tools, and applications -- that are automatically designed and fully customized in response to any question, instruction, or prompt." The Gemini app is first releasing generative interfaces with two experiments. First is visual layout. It is a magazine-style view that features photos and modules. So, if a user prompts the chatbot for an itinerary for a trip, Gemini might show a carousel of different types of trips that users can tap on to select, a slider might let them select how many days they will be on a vacation, and another bar might let them click on different points of interest. The selection will then allow the AI to curate the right plan, without them having to spell it out by either typing or speaking. Second is dynamic view. Using Gemini 3's agentic coding capabilities, the chatbot can design and code a custom user interface in real-time in response to a user prompt. For instance, if a user asks Gemini to "explain the Van Gogh Gallery with life context for each piece," it will generate an interactive window where users can click on elements, scroll, and slide to learn about the topic visually. Both of these features are rolling out now, although the company says users will initially only see one of them to help the company compare the two. Finally, the company is also introducing Gemini Agent. It is an experimental feature that can perform multi-step tasks within the app. It can connect to Google apps to manage the user's Calendar, add reminders, and even organise their inbox by drafting replies to emails. It can also perform web-based tasks such as making bookings or appointments. It is currently available on the web for Google AI Ultra subscribers in the US. New Features in AI Mode AI Mode in Search is also getting a few new capabilities with the integration of Gemini 3 Pro. 
However, these are first rolling out to Google AI Pro and AI Ultra subscribers in the US. They can now select "Thinking" from the model dropdown menu to access the latest AI model. Currently, usage of Gemini 3 Pro is rate-limited, but the company states that the limits will be increased soon. So, what's new in AI Mode? Due to improved reasoning capability, AI Mode can now tackle more complex queries and intelligently sift through a large number of web pages to find contextually relevant responses. A new automatic model selection tool is also being added to Search, which will route users' more challenging questions in AI Mode and AI Overviews directly to Gemini 3 Pro. For simpler questions, it will continue to use the faster models. Generative interfaces are also making their debut in AI Mode. Google says this will allow the AI tool to create visual layouts for responses in real-time, complete with interactive tools and simulations based on user queries. "When the model detects that an interactive tool will help you better understand the topic, it uses its generative capabilities to code a custom simulation or tool in real-time and adds it into your response," the post explained. Highlighting an example, the tech giant said if a user is researching mortgage loans, Gemini 3 in AI Mode can create a custom interactive loan calculator directly in the interface to help them compare different options.
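The arithmetic behind a generated loan calculator like that is the standard amortization formula; below is a minimal sketch of what such a tool computes under the hood. The loan amounts and rates are made up purely for illustration.

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula: M = P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Example inputs a user might plug into the generated calculator (made up figures).
for rate in (0.055, 0.0625, 0.07):
    payment = monthly_payment(400_000, rate, 30)
    print(f"{rate:.2%} over 30 years -> ${payment:,.2f}/month")
```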
[57]
Google Search with Gemini 3: Our most intelligent search yet
Today, we introduced Gemini 3, our most intelligent model with state-of-the-art reasoning, deep multimodal understanding and powerful agentic capabilities. It's now available in Google Search, starting with AI Mode -- marking the first time we've brought a Gemini model to Search on day one. Gemini 3 brings incredible reasoning power to Search because it's built to grasp unprecedented depth and nuance for your hardest questions. It also unlocks new generative UI experiences so you can get dynamic visual layouts with interactive tools and simulations -- generated specifically for you. Here's how Gemini 3 is supercharging Search. Starting today, Google AI Pro and Ultra subscribers in the U.S. can use Gemini 3 Pro, our first model in the Gemini 3 family of models, by selecting "Thinking" from the model drop-down menu in AI Mode. With Gemini 3, you can tackle your toughest questions and learn more interactively because it better understands the intent and nuance of your request. And soon, we'll bring Gemini 3 in AI Mode to everyone in the U.S. with higher limits for users with the Google AI Pro and Ultra plans.
[58]
Gemini 3 is here -- Google's most powerful AI model yet is crushing benchmarks, improving search and outperforming ChatGPT
Google just dropped Gemini 3, and it's already shattering benchmarks. Gemini 3 Pro now sits at the top of the LMArena leaderboard with what Google calls a "breakthrough score," trouncing Gemini 2.5 Pro across every major test -- math, long-form reasoning, multimedia understanding, you name it. Oh, and yes, it translates, too. This is the smartest model Google has built, period. The good news is Google isn't wasting any time rolling it out, either. Gemini 3 is already live in Google Search, the Gemini app and a full suite of developer tools -- marking one of the company's most aggressive moves yet in the AI arms race. Starting today, Google AI Pro and Ultra subscribers in the U.S. can use Gemini 3 Pro directly inside Search by selecting "Thinking" from the model selector. Google has reworked how Search finds information behind the scenes; it's a complete game-changer for the way we Google things. Gemini 3 digs deeper and genuinely understands what you're actually asking for in terms of context, not just keywords. Gemini 3's deeper reasoning allows Search to perform more background queries, uncover sources older models would have missed and better understand the intent behind your question, beyond the keywords. The result is cleaner, more accurate answers with fewer irrelevant results or hallucinations. And there's more on the way. Google says automatic model routing is coming soon, sending simple questions to lighter models and reserving Gemini 3 for the hard stuff. It is efficiency meets intelligence -- and a clear sign of where Search is headed next. This is where things get really interesting. Instead of just spitting out text answers, Gemini 3 can now generate interactive tools, visualizations and even simulations right inside your search results. Curious about the physics of the three-body problem? Gemini 3 will build you a live simulation you can actually play with. Shopping for a mortgage? It'll generate an interactive loan calculator where you can plug in different numbers and see the results instantly. The layout adapts to your question so no two searches look quite the same. Google is calling Gemini 3 its "most intelligent model" yet, and the specs back that up. Soon, Google will also roll out automatic model selection. Simple search questions will get answered faster with lighter models, while the tough stuff gets routed to Gemini 3. And Google says this is just the beginning: users can expect more dynamic visual tools and creative layouts to roll out in the upcoming months. Gemini 3 is Google's biggest swing at AI yet. It's smarter, more interactive and can actually handle bigger, more complex tasks. But the real shift is how tightly Google's weaving it into Search itself. With visual tools that build themselves, simulations that run in real time and deeper reasoning happening behind the scenes, Search is completely shifting from a search engine towards an AI assistant that actually knows what you're trying to do. Gemini 3 is already available within the Gemini app and AI Mode in Search, but only for Google AI Pro and Ultra users in the US for now. Deep Think mode launches in the coming weeks for Ultra subscribers.
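For a sense of what a generated three-body "live simulation" has to compute under the hood, here is a bare-bones numerical sketch of three bodies under mutual gravity in scaled units. The masses, starting conditions, and integrator choice are illustrative assumptions, not anything Google has published about how its generated tools work.

```python
import numpy as np

G = 1.0                                                    # gravitational constant, scaled units
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])      # made-up starting positions
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])      # made-up starting velocities

def accelerations(pos):
    """Pairwise Newtonian gravity on each body."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (np.linalg.norm(r) ** 3 + 1e-9)
    return acc

dt, steps = 0.001, 20_000
for _ in range(steps):                                     # velocity-Verlet style update
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)

print("final positions:\n", pos)
```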
[59]
Is Google-Alphabet becoming the new leader of AI stocks as Gemini 3 gains power?
Google's new Gemini 3 model is making the company a strong name in the AI world. The tool is getting a lot of attention from tech leaders and investors. Some analysts say this rise could change the competition between Google and OpenAI. Many major AI stocks are moving because of this growing excitement around Google's AI progress. Google's new AI model Gemini 3 is making waves. The model, released by Alphabet (Google's parent company), got great reviews and is boosting Google stock, says Melius Research analyst Ben Reitzes. Big tech leaders are impressed. Salesforce CEO Marc Benioff wrote on X that after trying Gemini 3, he's "not going back" to ChatGPT. Google shares rose nearly 5% on Monday after an 8% gain last week. Year-to-date, Google is up 64%, making it the top performer among the Magnificent Seven tech giants for 2025, as reported by Investors.com. Even OpenAI is noticing Google's strength. OpenAI CEO Sam Altman said in an internal memo that OpenAI has "some work to do but we are catching up fast," and warned the "vibes out there" could be rough for a bit. Google's rise could be a risk for other AI stocks. Ben Reitzes said Google's potential to dominate AI models is "the risk to watch" and could matter more than concerns about AI asset depreciation or circular deals. Losing out could hurt AI partners. Reitzes wrote, "If Google wins and OpenAI loses, a lot of spending goes away from Nvidia (NVDA), AMD (AMD), Microsoft (MSFT), CoreWeave (CRWV), Oracle (ORCL) and even Broadcom (AVGO)", as stated by Investors.com. OpenAI partners with many companies and plans to spend over $1 trillion on AI infrastructure in the next eight years. It still needs new users and enterprise customers to reach its goals. Google has its own cloud and AI chips. Google runs its hyperscale cloud with custom AI chips but still works with Nvidia and Broadcom. Broadcom also partners with OpenAI, so companies "need more than just Google," Reitzes said. Reitzes noted that AI models often "leapfrog" each other, so OpenAI will likely respond next year. He said, "Gemini 3 looks great, but it hasn't won yet." Google and AI stocks are bouncing back in the market. Despite fears of an AI bubble last week, Wedbush analyst Dan Ives said the AI bull market will continue, driven by Big Tech spending. He called it "an AI Arms Race" and highlighted Nvidia's guidance as key for 2026 growth, as per the report by Investors.com. Q1. Is Google becoming the new leader in AI because of Gemini 3? Yes, Google is gaining momentum with Gemini 3, which is boosting its stock and creating strong competition for OpenAI. Q2. How is Google's Gemini 3 affecting other AI companies? Analysts say Google's rise could reduce spending for firms like Nvidia, AMD, Microsoft, Oracle, and Broadcom if OpenAI falls behind.
[60]
Google Gemini 3 Review : Real Projects, Code & Honest Benchmarks, You Might Be Surprised
What if the AI revolution isn't quite as seamless as it seems? Google's Gemini 3, hailed as a cornerstone in the race toward Artificial General Intelligence (AGI), has been making waves with its bold claims and innovative features. From its ability to process text, images, and code simultaneously to its standout performance in UI design, Gemini 3 is marketed as a fantastic option. But beneath the polished announcements and glowing benchmarks lies a more complex reality, one filled with inconsistent coding performance, overhyped promises, and challenges that could redefine how we view this so-called "next-generation" AI. Is Gemini 3 truly the breakthrough it's made out to be, or is it another example of tech hype outpacing real-world utility? In this review, AI Labs explains what the headlines don't tell you about Gemini 3. You'll learn about its multimodal capabilities and how they set new standards in design, but also where it stumbles, particularly in live coding environments and critical developer tools. We'll also examine the innovative features Google has introduced, from AI-enhanced search to its ambitious "Vibe coding" concept, and whether they live up to their potential. By the end, you'll have a clearer picture of whether Gemini 3 is the future of AI or just another step in a long, winding road. Sometimes, the truth about innovation isn't in the spotlight, it's in the shadows of what's left unsaid. Gemini 3 Overview Gemini 3: Aiming for AGI Gemini 3 is designed as a cornerstone in Google's ambitious journey toward AGI, seamlessly integrating into its ecosystem as the default AI model for various applications. Its multimodal capabilities enable it to process and interpret diverse inputs, including text, images, and code, making it a versatile tool for professionals across industries. Google claims that Gemini 3 surpasses its predecessors and competitors, setting new benchmarks in AI performance. The model's ability to handle complex tasks and integrate seamlessly into Google's ecosystem underscores its potential. For example, its multimodal design allows it to analyze and synthesize information from multiple formats simultaneously, offering users a more dynamic and efficient experience. However, whether it fully lives up to Google's claims remains a subject of debate, as its performance varies across different domains. Performance Benchmarks: Where It Shines and Stumbles Gemini 3 has demonstrated impressive results in specific areas, particularly in competitive programming. It outperformed notable competitors like Claude 4.5 and GPT 5.1 in LiveCodeBench Pro, a benchmark designed to evaluate AI performance in competitive programming scenarios. Additionally, it achieved a score of 67.2% on SWE-bench, a test that measures performance on real-world GitHub issues, showcasing its ability to address practical coding challenges. However, its performance in Terminal Bench 2, which evaluates live terminal environments, has been inconsistent. This inconsistency highlights a critical limitation: while Gemini 3 excels in controlled environments, it struggles with the unpredictability of real-world coding tasks. For developers working on high-stakes projects, this raises questions about its reliability and practical utility. UI Design: A Clear Standout One of Gemini 3's most notable strengths lies in its UI design capabilities. 
The model excels at generating clean, functional, and visually appealing user interfaces, complete with smooth animations and creative assets. Compared to competitors like Claude 4.5 and GPT 5.1, Gemini 3 demonstrates superior creativity and precision in designing wallpapers, layouts, and interactive elements. For developers and designers focused on visual and interactive design, Gemini 3 sets a new standard. Its ability to create aesthetically pleasing and user-friendly designs makes it an invaluable tool for those in creative industries. By streamlining the design process and offering innovative solutions, Gemini 3 has established itself as a leader in this domain. Coding Capabilities: A Mixed Bag Gemini 3's coding capabilities present a more complex picture. Its 1-million-token context window allows it to handle large datasets and intricate tasks, offering developers a powerful tool for managing complex projects. However, its performance has been inconsistent, particularly when using the Gemini CLI, a command-line interface designed for developers. Critics have described the CLI as clunky and unreliable, limiting its appeal for those working on critical or intricate projects. In comparison, models like Claude 4.5 provide a more stable and predictable coding experience. While Gemini 3 is fast and equipped with advanced features, its struggles with complex implementations highlight a gap between its potential and its practical application. For developers seeking reliability and precision, this inconsistency may be a significant drawback. Innovative Tools and Features Gemini 3 introduces a suite of innovative tools aimed at enhancing the developer experience. These tools reflect Google's ambition to redefine how developers interact with AI, offering features that blend creativity with functionality. Key tools include:
* Google Antigravity: A customized fork of Visual Studio Code (VS Code) designed to boost productivity and streamline the coding process.
* AI-enhanced search: A feature that simplifies information retrieval across Google's ecosystem, allowing users to find relevant data more efficiently.
* Vibe coding: A concept emphasizing creativity and intuition in coding, though its practical applications and benefits remain unclear.
While these tools showcase Google's innovative approach, their real-world impact is still under evaluation. Developers and professionals may find these features intriguing, but their effectiveness in practical scenarios will ultimately determine their value. Limitations and Challenges Despite its advancements, Gemini 3 faces several challenges that temper its promise. These limitations highlight areas where the model falls short of expectations, raising questions about its broader applicability. Key challenges include:
* Inconsistent coding performance: The unreliability of the Gemini CLI undermines its utility for developers working on critical projects.
* Overhyped claims: Critics argue that Google's portrayal of Gemini 3 as a fantastic tool is exaggerated, particularly in areas requiring stability and reliability.
* Limited broader applicability: While it excels in design-focused tasks, its performance in other domains, such as live coding environments, is less impressive.
These challenges suggest that while Gemini 3 is a promising tool, it is not yet the comprehensive solution Google envisions. Its strengths in specific areas are offset by notable weaknesses, limiting its appeal for users seeking a versatile and reliable AI model. 
A Promising Yet Imperfect Tool Gemini 3 represents a significant step forward in AI development, particularly in UI design and multimodal understanding. Its ability to create visually stunning and functional designs sets it apart from competitors like Claude 4.5 and GPT 5.1. For developers and designers focused on creative tasks, Gemini 3 offers valuable tools and features that enhance productivity and innovation. However, its inconsistent coding performance and overhyped claims limit its broader appeal. While it excels in specific domains, it struggles to deliver the reliability and versatility required for more demanding applications. For those seeking a dependable, all-purpose AI solution, competing models may still hold the edge. Gemini 3 is a promising tool with significant potential, but it remains a work in progress, with room for improvement in key areas.
[61]
Google Unleashes Gemini 3 Pro: The New Benchmark for AI Intelligence
After over seven months of anticipation, Google has officially released its state-of-the-art Gemini 3 Pro AI model. According to Google, Gemini 3 represents "a suite of highly-capable, natively multimodal, reasoning models." With the release of Gemini 3, Google claims it's "taking another big step on the path toward AGI". Talking about the architecture, Gemini 3 Pro is a sparse mixture-of-experts (MoE) model, built on the Transformer architecture. On top of that, Google says Gemini 3 Pro was trained solely on Google's TPUs, which is impressive. Now, coming to benchmarks, Gemini 3 Pro has hit it out of the park. In the challenging Humanity's Last Exam, Gemini 3 Pro achieved 37.5% without any tool use. It even outclassed OpenAI's latest GPT-5.1 model, which scored 26.5%. In LMArena, Gemini 3 Pro has taken the first spot with an ELO score of 1,501 points. Next, in the new ARC-AGI-2 benchmark, Gemini 3 Pro got 31.1%, again beating GPT-5.1, which received only 17.6%. In SWE-Bench Verified, Gemini 3 Pro got 76.2%, nearly matching GPT-5.1's 76.3%. However, in this benchmark, Anthropic's Claude Sonnet 4.5 continues to lead with 77.2%. Google is also working to bring Gemini 3 Deep Think, which scored 41% on Humanity's Last Exam and 45.1% on ARC-AGI-2, to Google AI Ultra subscribers. In terms of agentic coding, Gemini 3 Pro is the leader in WebDev Arena with a 1,487 ELO score. It can do long-horizon, high-level planning to perform multi-step, real-world tasks. Gemini Agent is coming to Google AI Ultra subscribers. Google also introduced a new Antigravity dev platform, which is basically an agent-first development environment. It bundles Gemini 3 Pro, the Gemini 2.5 Computer Use model, and the Nano Banana image generation model. Agents can directly control the editor, terminal, and the browser to plan tasks and execute code. Gemini 3 Pro is rolling out in the Gemini app for everyone, starting today. Pro and Ultra subscribers can use the new model in AI Mode in Google Search.
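For readers unfamiliar with the term, a sparse mixture-of-experts layer replaces a Transformer block's single feed-forward network with many "expert" networks and routes each token to only a few of them, so only a fraction of the parameters is active per token. The toy layer below illustrates the general idea with top-k routing; it is a generic sketch, not Gemini 3 Pro's actual architecture, and every dimension and name in it is made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy top-k sparse mixture-of-experts block (illustrative only)."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # learned gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_logits = self.router(x)                          # (num_tokens, num_experts)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                        # each token visits only top_k experts
            for idx, expert in enumerate(self.experts):
                mask = chosen[:, slot] == idx
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)           # 16 token embeddings
print(SparseMoELayer()(tokens).shape)   # torch.Size([16, 512])
```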
[62]
Sundar Pichai Introduces Gemini 3 As Google's 'Most Intelligent' AI Model: 'Get What You Need With Less Prompting' - Alphabet (NASDAQ:GOOG), Amazon.com (NASDAQ:AMZN)
On Tuesday, Alphabet Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL) CEO Sundar Pichai unveiled Google Gemini 3, calling it the company's most capable and nuanced AI system so far, as the search giant accelerates its competition with OpenAI's GPT-5. Google Calls Gemini 3 Its Most Advanced AI Model Yet Google introduced Gemini 3 as a major leap forward in its multimodal and agentic capabilities, positioning the model as a smarter and more context-aware successor to Gemini 2.5. "Gemini 3 is our most intelligent model that helps you bring any idea to life," the company said in a blog post. In a post on X, formerly Twitter, Pichai said the upgraded system is designed to help users "get what you need with less prompting" by understanding intent more accurately and handling complex tasks with greater depth. The model begins rolling out on Tuesday to select paid subscribers through the Gemini app, AI Mode in Search and enterprise tools, with broader availability expected in the coming weeks. Aiming To Reinvent Search With More Visual, Interactive Answers Google said Gemini 3 will power new generative interfaces capable of producing magazine-style explanations, interactive calculators and dynamic layouts featuring images, tables and grids. In a demonstration, Google showed the model explaining Van Gogh's works with contextual visuals and narrative summaries. In the blog post, Demis Hassabis, CEO of Google DeepMind, said Gemini 3 is designed to replace "cliché and flattery" with more honest and insightful responses. A Direct Challenge To OpenAI As Big Tech Ramps Up AI Spending The launch comes as OpenAI continues updating GPT-5. This month, the AI startup has released two improved versions described as "warmer," more capable and better at following instructions. Both companies are pushing aggressively to stay ahead as demand for AI accelerates. Alphabet and other tech giants -- including Microsoft Corp (NASDAQ:MSFT), Meta Platforms, Inc. (NASDAQ:META) and Amazon.com, Inc. (NASDAQ:AMZN) -- expect their collective capital spending to reach about $600 billion this year. Google Expands Developer Tools With Antigravity Google also unveiled Antigravity, a new agent platform that lets developers build at a higher, task-oriented level. Businesses can integrate Gemini 3 through Vertex AI, where the model can generate onboarding materials, analyze videos and factory images, or support procurement workflows. The Gemini app now has 650 million monthly active users and Google's AI Overviews reaches two billion monthly users.
[63]
Google's Gemini 3 AI Models Are Finally Here With These New Features
* Gemini 3 Pro tops the LMArena leaderboard with 1501 Elo
* The AI models feature improved frontend coding capability
* Google is integrating Gemini 3 in Search's AI Mode
Google finally released the Gemini 3 family of artificial intelligence (AI) models on Tuesday. The Mountain View-based tech giant called it the company's most intelligent AI model yet, highlighting that it outperforms its predecessor as well as OpenAI's GPT-5.1 in every single major benchmark. The new AI model brings improvements across various aspects, including reasoning, conversations, coding, mathematics, as well as agentic capabilities. The company highlights that Gemini 3 is a major step forward towards creating complex agentic experiences for users.
[64]
Can Alphabet's Gemini 3 Overtake ChatGPT? | The Motley Fool
Alphabet released Gemini 3 this month. The company claims it offers "PhD-level reasoning" for various tasks, providing context about what you're trying to learn. It's also less prone to "flattery" than its rival. But will it be enough to keep Alphabet at the forefront of AI chatbots? Here's how Gemini 3 is taking on the competition, and how Alphabet is succeeding in the AI space, even if it doesn't overtake OpenAI. Gemini 3 builds on the success of its predecessors and comes just eight months after Gemini 2.5 was released. Alphabet highlighted several of its capabilities in a blog post, claiming that it outperforms all frontier AI models on major benchmarks and features a new "Deep Think" mode that can solve complex problems with "PhD-level" depth. In practical terms, the company states that Gemini 3 can function more like an agent to complete multi-step tasks, such as booking a local service or organizing your email. The company even said that you could give Gemini 3 a recording of yourself playing pickleball, and it could suggest training on how to improve your game. Alphabet CEO Sundar Pichai said in a statement: "It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem." The company says its new version of the chatbot will be "[S]mart, concise and direct, trading cliché and flattery for genuine insight -- telling you what you need to hear, not just what you want to hear." This seemed like a clear jab at ChatGPT, which is known to "flatter" its users. Alphabet also said that Gemini 3 is the company's most powerful vibe coding model yet. This makes it easier than ever for developers to tell the model what they want, producing the code with better visualization and interactivity than before. The latest version of Gemini 3 is no doubt a formidable competitor to ChatGPT. While there were some early concerns that OpenAI's chatbot would render services like Google Search obsolete -- and it's certainly a threat -- Alphabet's Gemini has alleviated some of those fears. Gemini has 650 million monthly active users, and Alphabet says that its AI Overview -- its artificial intelligence responses in Google Search -- has more than 2 billion monthly users. The company states that 70% of Google Cloud customers utilize its AI. Alphabet charges for the most advanced version of its chatbot, both for individuals and enterprise accounts. Most of the company's AI services are reported under the Google Cloud segment, which reported strong growth of 34% in the third quarter to $15.1 billion, beating analysts' consensus estimate of $14.7 billion. Alphabet's advertising revenue jumped 12.6% to $74.1 billion, proving that -- at least for now -- the company can grow ad sales even in the wake of expanding chatbot usage. ChatGPT has 700 million weekly users, so Gemini has a long way to go before it catches up. Whether Gemini 3 overtakes ChatGPT may not ultimately matter. All of the above shows that Alphabet is successful in executing on its AI strategy and is growing its cloud revenue and ad sales as it does so. And with Alphabet implementing Gemini 3 into its latest Search tools and coding platform, the company can lock more people in to its ecosystem. Even better, Alphabet's stock has a price-to-earnings ratio of just 28 right now. 
That's cheaper than the S&P 500's average of 31, which makes Alphabet one of the few successful, profitable AI companies with a relatively cheap share price. It's no wonder that Warren Buffett's Berkshire Hathaway recently bought $4 billion of this AI stock.
[65]
A new era of intelligence with Gemini 3
Nearly two years ago we kicked off the Gemini era, one of our biggest scientific and product endeavors ever undertaken as a company. Since then, it's been incredible to see how much people love it. AI Overviews now have 2 billion users every month. The Gemini app surpasses 650 million users per month, more than 70% of our Cloud customers use our AI, 13 million developers have built with our generative models, and that is just a snippet of the impact we're seeing. And we're able to get advanced capabilities to the world faster than ever, thanks to our differentiated full stack approach to AI innovation -- from our leading infrastructure to our world-class research and models and tooling, to products that reach billions of people around the world. Every generation of Gemini has built on the last, enabling you to do more. Gemini 1's breakthroughs in native multimodality and long context window expanded the kinds of information that could be processed -- and how much of it. Gemini 2 laid the foundation for agentic capabilities and pushed the frontiers on reasoning and thinking, helping with more complex tasks and ideas, leading to Gemini 2.5 Pro topping LMArena for over six months. And now we're introducing Gemini 3, our most intelligent model, that combines all of Gemini's capabilities together so you can bring any idea to life. It's state-of-the-art in reasoning, built to grasp depth and nuance -- whether it's perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem. Gemini 3 is also much better at figuring out the context and intent behind your request, so you get what you need with less prompting. It's amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room. And starting today, we're shipping Gemini at the scale of Google. That includes Gemini 3 in AI Mode in Search with more complex reasoning and new dynamic experiences. This is the first time we are shipping Gemini in Search on day one. Gemini 3 is also coming today to the Gemini app, to developers in AI Studio and Vertex AI, and in our new agentic development platform, Google Antigravity -- more below. Like the generations before it, Gemini 3 is once again advancing the state of the art. In this new chapter, we'll continue to push the frontiers of intelligence, agents, and personalization to make AI truly helpful for everyone. We hope you like Gemini 3, we'll keep improving it, and look forward to seeing what you build with it. Much more to come!
[66]
Salesforce CEO Marc Benioff says he won't use OpenAI's ChatGPT again: 'I am not going back, it feels like...'
Salesforce CEO Marc Benioff said he is ditching OpenAI's ChatGPT for Google's newest AI model, Gemini 3. His remarks came after he used Google's latest model and experienced a significant shift, calling it an "insane" leap forward in reasoning, speed, and multimodal capabilities. "Holy shit," Benioff wrote on X on Sunday. "I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back. The leap is insane -- reasoning, speed, images, video... everything is sharper and faster. It feels like the world just changed, again." Benioff added that the two-hour session with Google's new model had convinced him to switch permanently. Google CEO Sundar Pichai described Gemini 3 as the company's "most intelligent model, that combines all of Gemini's capabilities together so you can bring any idea to life." Benioff's reaction quickly went viral online and had garnered more than one million views as of early Monday morning, reports Business Insider. In his post on X, Benioff claimed that Gemini 3 has brought a monumental shift to the AI landscape, citing major gains in reasoning, speed, image quality, and video processing as the key factors behind his decision. Google and its DeepMind division unveiled Gemini 3 last week, describing it in a blog post as their "most powerful agentic and vibe coding model yet," capable of generating and understanding text, images, video, and code with tighter integration across the Google ecosystem. Benioff's endorsement underscores the mounting competitive pressure on OpenAI as rival tech giants like Google continue to make major strides in AI development. Tesla CEO Elon Musk and even OpenAI chief executive Sam Altman praised Pichai following the launch of Gemini 3. Altman, the CEO of OpenAI, Google's biggest rival in the AI race, congratulated the company on Gemini 3's launch, posting on X last week: "Looks like a great model." Altman also noted potential "temporary economic headwinds" in an internal memo but expressed confidence in OpenAI's rapid progress and leadership in the AI race. "We have built enough strength as a company to weather great models shipping elsewhere competition... (so), having most of our research team focused on really getting to superintelligence is critically important," Altman wrote. Former Tesla AI director Andrej Karpathy said on X he had a "positive early impression" of Gemini 3, calling it "very solid daily driver potential" and "clearly a tier 1 LLM." Stripe CEO Patrick Collison also weighed in, posting on X that Gemini 3 built an "interactive web page summarizing 10 breakthroughs in genetics," which he called "pretty cool." Benioff's endorsement of Gemini 3 is notable, given Salesforce's extensive partnerships across the AI ecosystem -- including with OpenAI and Anthropic -- and reflects how quickly preferences among top tech leaders are shifting as models become faster and more capable.
Google's Gemini 3 enters the fray amid escalating competition from OpenAI's GPT-5.1 and Anthropic's Claude Sonnet 4.5, with each new release pushing the limits of reasoning and advanced tool use.
[67]
Getting Started with Gemini 3: Google AI Studio
What if you could unlock the power to solve complex equations, generate Python scripts on demand, or analyze multimedia data, all with a single tool? Enter Gemini 3, Google's innovative AI model designed to transform how developers approach problem-solving and innovation. With its ability to handle tasks ranging from multimodal reasoning to automating workflows, Gemini 3 isn't just another AI tool; it's a compelling option for anyone looking to push the boundaries of what's possible in software development. Whether you're building smarter applications or streamlining tedious processes, this model offers a level of precision and adaptability that's hard to match. In this overview, you'll discover how Gemini 3's advanced capabilities can transform your development workflow. From exploring its integration with Google AI Studio to using its APIs for custom applications, we'll guide you through the essentials to get started effectively. You'll also gain insights into its standout features, such as configurable "thinking levels" and Python code generation, and learn how to apply them to real-world scenarios. By the end, you'll not only understand what makes Gemini 3 unique but also feel equipped to harness its potential for your most ambitious projects. After all, innovation begins with the right tools, and Gemini 3 might just be the one you've been waiting for. Gemini 3 is an innovative AI model tailored to address complex challenges while fostering creativity and efficiency. Currently available as a Pro Preview, it offers early access to its advanced functionality, allowing developers to explore its potential and refine their projects before full-scale deployment. Google AI Studio serves as the primary platform for exploring and using Gemini 3's capabilities. This interactive environment allows developers to experiment with tasks such as multimodal reasoning, code generation, and workflow automation. By providing a hands-on playground, Google AI Studio enables you to familiarize yourself with Gemini 3's features before integrating them into larger projects. For instance, you can input raw data and receive actionable outputs, significantly reducing the time and effort required for manual processing. This platform is particularly useful for testing ideas, refining workflows, and making sure that the model aligns with your project goals. Gemini 3's API integration allows developers to incorporate its advanced capabilities into custom applications. Using the Python SDK, for example, you can build applications that use Gemini 3's reasoning and coding capabilities. A practical use case might involve creating a tool to classify objects, such as distinguishing between fruits and vegetables, by adjusting the model's "thinking level" to match the complexity of the task. This adaptability makes Gemini 3 a versatile solution for a wide range of development needs. Google Colab is an excellent platform for integrating Gemini 3 into your development workflow. If the Google GenAI SDK is not pre-installed, you can quickly set it up within Colab to begin experimenting. By combining Colab's collaborative features with Gemini 3's advanced capabilities, you can accelerate development cycles and deliver high-quality solutions more efficiently. Gemini 3's versatility makes it an invaluable tool across various industries and use cases.
Practical applications span fields such as data science, multimedia analysis, and software development, and these capabilities empower developers to tackle both straightforward and intricate challenges. One of Gemini 3's most notable features is its configurable "thinking level," which allows you to adjust the model's reasoning complexity to suit specific tasks, ensuring appropriate performance across a variety of scenarios: a low level for quick, simple requests, and a higher level for problems that benefit from deeper reasoning. Additionally, Gemini 3 is optimized for handling large datasets and multimedia content. By adjusting media resolution, you can balance performance and resource efficiency, making it ideal for tasks that require processing high volumes of data or analyzing detailed visual content. Google AI Studio also offers a wealth of resources to support developers in maximizing Gemini 3's potential. Whether you are an experienced developer or new to AI, these resources provide valuable insights and practical guidance, and engaging with the developer community can foster collaboration and innovation, helping you refine your projects and contribute to the growing ecosystem of Gemini 3-powered applications. Gemini 3 is a powerful tool for developers seeking to harness the latest advancements in artificial intelligence. With its robust features, seamless integration options, and extensive resources, it opens up new opportunities for innovation across a variety of fields. By using its capabilities, you can tackle complex challenges, streamline workflows, and create impactful solutions tailored to your specific needs.
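To make the fruit-versus-vegetable example above concrete, here is a minimal sketch of calling the model through the google-genai Python SDK. It uses the SDK's standard Client and generate_content interface; the gemini-3-pro-preview model id and the thinking_level setting are assumptions drawn from the preview descriptions above and may differ in your environment.

```python
# pip install google-genai
# Minimal sketch: classify items with a low "thinking level" for a simple task.
from google import genai
from google.genai import types

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview model id
    contents="Classify each of these as fruit or vegetable: tomato, kale, mango, rhubarb.",
    config=types.GenerateContentConfig(
        # Assumed knob: Gemini 3 is described as exposing a thinking level
        # rather than a raw thinking-token budget.
        thinking_config=types.ThinkingConfig(thinking_level="low"),
    ),
)
print(response.text)
```

For a harder task, such as reasoning over a long document, the same call would simply raise the thinking level, which is the adaptability the article describes.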
[68]
Google rolls out Gemini 3 with Deep Think mode, enhanced coding and agentic actions
Google has introduced Gemini 3, the next generation of its AI model series, bringing upgrades in long-form reasoning, multimodal interpretation, interface generation, developer tools, and agent-based task execution. The rollout includes Gemini 3 Pro, a new Deep Think mode, expanded multimodal learning workflows, redesigned app capabilities, and the first release of Gemini Agent for multi-step automation. Google and Alphabet CEO Sundar Pichai shared a brief note highlighting the progress of the Gemini program, which began nearly two years ago. He said Gemini has grown into one of Google's largest scientific and product efforts, supported by an integrated full-stack approach combining infrastructure, research, models, and products. Among the key points he highlighted, Pichai said Gemini 3 brings together advancements from earlier versions to deliver deeper reasoning, better intent understanding, and more accurate multi-step interpretation with fewer instructions. He confirmed that Gemini 3 is rolling out across Google's ecosystem -- AI Mode in Search, the Gemini app, AI Studio, Vertex AI, and Google Antigravity -- marking the first time a Gemini release is launching inside Search on day one. He added that Gemini 3 represents the next phase of Google's AI roadmap, with continued focus on intelligence, agentic systems, and personalization in future releases. Google says Gemini 3 Pro delivers stronger reasoning, clearer responses, and better multimodal grounding, and cites benchmark gains across reasoning, multimodal, and coding evaluations. The model is optimized to avoid generic phrasing, offer more direct answers, and maintain accuracy across text, audio, images, video, and code. Deep Think extends step-by-step reasoning with stronger analytical performance and is undergoing extended safety reviews before wider rollout. Gemini 3 supports expanded learning tasks through multimodal understanding and a 1M-token context window, and it improves instruction adherence, zero-shot coding, and agentic coding. Antigravity offers a development environment where Gemini 3 agents can plan, write, and validate code, enabling parallel, consistent end-to-end software workflows. Google also notes that Gemini 3 delivers its best-ever vibe coding performance inside Canvas, enabling more feature-rich app generation within the workspace. These capabilities are part of the revamped Gemini app, which now includes a "Thinking" model selector and a My Stuff library for saved outputs. Gemini Agent, built using insights from Project Mariner, can break down and execute complex multi-step tasks, and it confirms sensitive actions such as purchases or message sending. On long-horizon planning, Gemini 3 strengthens multi-step planning and avoids tool-use drift during long tasks; it leads Vending-Bench 2, designed to test year-long operational decision-making. Gemini 3 also includes stronger protections, validated through extensive internal and external assessments. Google says additional Gemini 3 series models will be released soon, and the team says it looks forward to user feedback during the rollout.
[69]
Google launches Gemini 3, its most advanced reasoning model yet
Gemini, which the company first launched two years ago, now has 650 million monthly active users, and 13 million developers are building on it, Google and Alphabet chief executive Sundar Pichai said. Google on Tuesday launched its most advanced reasoning model, Gemini 3, which will be available globally across its products including Search, AI Studio and the Gemini app. The tech giant also launched an agentic development platform, Google Antigravity. Google termed the Gemini 3 launch a big step towards artificial general intelligence (AGI). Gemini 3 will be available across products, with more complex reasoning capabilities, and across all countries, including India. Gemini Pro and Ultra users will be able to access it with higher limits, and through Google's latest partnership with Reliance Jio, users will be able to access Pro features. Google is also seeing increasing traction for products such as Nano Banana in India. Chris Struhar, vice-president of product for the Gemini app, said that with the Jio partnership and free access to the Gemini suite for students, the firm is seeing how students are using Gemini for homework. Gemini 3 is Google's most intelligent model for multimodal understanding, agentic capabilities and vibe coding, Koray Kavukcuoglu, chief technology officer and chief AI architect at Google DeepMind, said at a global media briefing held virtually. Gemini 3 outperforms its predecessor across AI benchmarks. The model showcases PhD-level reasoning with a score of 37.5% on Humanity's Last Exam, a benchmark that evaluates models across mathematics, humanities and natural sciences. The latest model also outperforms Gemini 2.5 Pro when it comes to coding. Users can utilise Gemini 3 for coding in Google AI Studio, Vertex AI, and the agentic development platform Antigravity. Gemini 3 will also be available on Cursor and other coding platforms. With the launch of Antigravity, Google is getting into the integrated development environment (IDE) space; an IDE is an application that helps developers write and manage code efficiently, like Cursor. In response to ET's question about whether Google will be competing with platforms such as Cursor, Kavukcuoglu said the firm would not look at it that way since it is also partnering with Cursor and others very closely in the market. "It's important for us to reach and connect with the users where they are. It is early days in AI development and (it is unclear) how AI impacts different areas and different industries. I think this is important for us to be able to experiment as well. I am sure there will be others who are experimenting, and each product will sort of evolve too," he said. He said Google will continue its partnership with Cursor.
[70]
Gemini 3 Research Agent: Builds Reports While You Sleep
What if you could delegate your most complex research tasks to an AI that not only understands your objectives but also plans, executes, and refines its approach with precision? Enter Gemini 3, an innovative AI model designed to transform the way we approach research. Paired with the versatile Deep Agents harness, this duo doesn't just automate repetitive tasks; it transforms workflows, allowing researchers to tackle long-term planning, coding automation, and structured output generation with unprecedented efficiency. In a world where time is the most valuable resource, the ability to offload intricate processes to a system that learns, adapts, and delivers is a significant advantage. Could this be the future of research as we know it? LangChain explains how Gemini 3 and Deep Agents can work together to create advanced research agents capable of reshaping productivity. You'll uncover the features that make Gemini 3 excel at long-horizon planning and coding, as well as the customizable tools within Deep Agents that streamline even the most demanding workflows. Whether you're a developer seeking to automate terminal-based tasks or a researcher aiming to generate high-quality structured outputs, this guide will show you how these tools can be tailored to meet your unique needs. By the end, you'll understand not just how to build a research agent, but why this approach could redefine the boundaries of what's possible in modern research. Gemini 3 is designed to excel across a broad spectrum of tasks, consistently delivering high performance on industry benchmarks, and these strengths establish it as a robust and adaptable tool, making it an ideal foundation for building research agents tailored to a variety of needs. The Deep Agents harness complements Gemini 3 by providing an open source framework equipped with tools that simplify and enhance research workflows. The harness also supports extensive customization, allowing the integration of specialized tools and instructions to meet the unique demands of individual research projects; this flexibility ensures that the framework can adapt to a wide range of applications. The combination of Gemini 3 and the Deep Agents harness streamlines research workflows: together, they automate repetitive tasks, enhance productivity, and ensure precise outputs, making them valuable tools for professionals in both technical and non-technical fields. One of the most valuable aspects of Gemini 3 is its ability to produce clear and structured outputs. Whether you require detailed research reports, task summaries, or properly formatted citations, the system ensures coherence and clarity in its results. Its moderate token usage further enhances its suitability for extensive research applications, striking a balance between performance and cost-efficiency. This capability is particularly beneficial for projects that demand precision and consistency in their deliverables. To begin using the capabilities of Gemini 3 and the Deep Agents harness, a quick-start repository is available. This resource provides step-by-step setup instructions, including examples for both the interactive UI and Python notebook-based workflows.
By following these guidelines, you can quickly deploy a research agent tailored to your specific needs and objectives. The repository also includes sample configurations and best practices to help you maximize the potential of these tools from the outset. Gemini 3, when integrated with the Deep Agents harness, offers a comprehensive solution for building advanced research agents. Its combination of long-term planning, coding automation, and task delegation capabilities ensures the efficient execution of complex workflows. Whether managing research projects, generating detailed reports, or automating repetitive tasks, this system provides the tools and flexibility needed to achieve your goals. With its user-friendly interface, customizable features, and strong performance metrics, Gemini 3 and Deep Agents are poised to redefine how research tasks are approached and executed, offering a streamlined and effective pathway to innovation.
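As a rough illustration of the pattern described above, the sketch below wires a Gemini model into LangChain's open source Deep Agents harness. It assumes the deepagents package's create_deep_agent helper and the langchain-google-genai chat model wrapper; the model id, the placeholder search tool, and the instructions string are illustrative and are not taken from LangChain's quick-start repository.

```python
# pip install deepagents langchain-google-genai
# Sketch of a minimal research agent: Gemini supplies the reasoning,
# the Deep Agents harness supplies planning, sub-tasking, and report output.
from deepagents import create_deep_agent
from langchain_google_genai import ChatGoogleGenerativeAI

def search_web(query: str) -> str:
    """Placeholder search tool; swap in a real search API in practice."""
    return f"(stub) top results for: {query}"

agent = create_deep_agent(
    tools=[search_web],
    instructions=(
        "You are a research agent. Plan the work, gather sources with the "
        "search tool, and write a structured report with citations."
    ),
    model=ChatGoogleGenerativeAI(model="gemini-3-pro-preview"),  # assumed model id
)

result = agent.invoke(
    {"messages": [{"role": "user",
                   "content": "Summarize recent progress in solid-state batteries."}]}
)
print(result["messages"][-1].content)
```

Run on a schedule or overnight, a loop around this call is essentially the "builds reports while you sleep" workflow the article describes.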
[71]
Google launches Gemini 3, embeds AI model into search immediately
Google has introduced Gemini 3, its latest artificial intelligence innovation, which is set to revolutionize products like Google Search from day one. This strategic launch is designed to enhance revenue streams and reinforce Google's dominance in the rapidly evolving AI landscape. New functionalities such as Gemini Agent and Antigravity promise greater efficiency and versatility for users and enterprises alike. Alphabet's Google on Tuesday launched the latest version of its artificial intelligence model Gemini, emphasizing that the new capabilities will be immediately available in several profit-generating products like its search engine. Gemini 3, arriving 11 months after the second generation of the model, appears on paper to keep Google at the forefront of the AI race. During a press briefing, executives highlighted Gemini 3's lead position on several popular industry leaderboards that measure AI model performance. CEO Sundar Pichai described it as "our most intelligent model," in a company blog post. However, the AI race has increasingly shifted away from benchmarks to money-making applications of the technology, as Wall Street watches for signs of an AI bubble. Alphabet's stock has so far been buoyed this year largely due to the financial success from AI offerings from its cloud computing division. But even with leading developers like Google, OpenAI and Anthropic behind them, new AI model updates have had trouble distinguishing themselves, only attracting attention when they fail, as Meta experienced earlier this year. Google emphasized that Gemini 3, unlike past releases, was already underpinning a handful of revenue-generating consumer and enterprise products at launch. "We think Gemini has set quite a new pace in terms of both releasing the models, but also getting it to people faster than ever before," Koray Kavukcuoglu, Google's chief AI architect, told reporters during the briefing. Pichai said the Gemini 3 launch marked the first time that Google had incorporated its new model into its search engine from day one. In the past, new versions of Gemini took weeks or months to embed into Google's most highly used products. Paying users of Google's premium AI subscription plan will have access to Gemini 3 capabilities in AI Mode, a search feature that dispenses with the web's standard fare in favor of computer-generated answers for complicated queries. NEW FEATURES Improvements to Gemini 3 in domains such as coding and reasoning enabled Google to build out a set of new features, both for consumers and enterprise customers. The company debuted "Gemini Agent," a feature that can complete multi-step tasks, such as organizing a user's inbox or booking travel arrangements. The tool brings Google closer to its AI chief Demis Hassabis' vision for a "universal assistant" that has been referred to internally as AlphaAssist, as Reuters previously reported. Google also redesigned the Gemini app to return answers reminiscent of a full-fledged website, a further blow to content publishers who rely on web traffic to generate revenue. Josh Woodward, the vice president in charge of the app, demonstrated to reporters how Gemini can now respond to a query like "create a Van Gogh gallery with life context for each piece" by generating an on-demand interface with visual and interactive elements. For business customers, Google previewed a new product called Antigravity, a new software development platform where AI agents can plan and execute coding tasks on their own. 
[72]
What Happens When Jules Gets Gemini 3 Pro? Cleaner Commits, Faster Builds
What if your coding assistant could not only understand your intent but also anticipate your next move, seamlessly adapting to your workflow? With the integration of Gemini 3 Pro into Jules, that vision is no longer a distant dream but a tangible reality. Google's latest AI breakthrough brings unparalleled precision and adaptability to Jules, transforming it into a powerhouse for developers tackling everything from debugging to large-scale system design. Imagine a tool that minimizes oversight, reduces disruptions, and handles intricate, multi-step processes with ease, this is the promise of Gemini 3 Pro, now available in Jules. This feature explores how the fusion of advanced AI reasoning and practical enhancements is reshaping the way developers approach their work. From streamlined multi-interface support to smarter automation of repetitive tasks, Jules is designed to empower developers to focus on innovation rather than logistics. But what makes this update truly new? Beyond the technical upgrades, it's the platform's ability to unify workflows, adapt to diverse coding environments, and evolve alongside its users. As we delve deeper, you'll discover how Jules, powered by Gemini 3 Pro, is setting a new standard for autonomous coding tools, one that prioritizes both productivity and creativity. Google has unveiled the integration of its advanced AI model, Gemini 3 Pro, into Jules, an autonomous coding agent engineered to transform software development workflows. This integration, currently available to Google AI Ultra subscribers, introduces a suite of features designed to enhance workflow efficiency, improve task coherence, and streamline multi-step process management. Pro plan users are expected to gain access soon, marking a pivotal step in the evolution of AI-driven coding tools. The integration of Gemini 3 Pro significantly enhances Jules' ability to tackle complex coding challenges with greater precision and minimal oversight. By using improved reasoning capabilities and intent alignment, Jules ensures a deeper understanding of user instructions. This enables the platform to manage intricate, multi-step workflows while maintaining contextual accuracy, reducing the need for constant user intervention. For developers working on large-scale or intricate systems, this translates to fewer disruptions, more consistent results, and a smoother development process. Gemini 3 Pro's advanced AI capabilities also improve Jules' adaptability to diverse coding environments. Whether you are debugging, refactoring, or building new features, the platform's enhanced intelligence ensures that tasks are completed efficiently and with a higher degree of reliability. This makes Jules a valuable tool for developers seeking to optimize their productivity without compromising on quality. Jules now offers expanded support for terminal, CLI, and API interfaces, providing developers with the flexibility to work in their preferred environments. This multi-interface support ensures seamless transitions between platforms, allowing users to maintain synchronized workflows regardless of their tools or locations. Unified project views further enhance usability by consolidating all project-related information into a single, accessible interface. For developers collaborating across distributed teams, Jules simplifies project management by making sure that updates and changes are reflected in real time. 
Whether you are coding locally or working with a global team, Jules adapts to your workflow, making it easier to manage projects across diverse tools and environments. This unified approach not only improves efficiency but also reduces the cognitive load associated with juggling multiple platforms. Learn more about Gemini 3 AI by reading our previous articles, guides and features : Jules is designed to simplify the development process by automating repetitive tasks and consolidating essential functionalities into a single platform. By reducing the cognitive burden of managing multiple tools, Jules allows developers to focus on innovation and problem-solving. Enhanced reliability and predictability ensure that daily operations run smoothly, minimizing interruptions and allowing a more productive coding experience. The platform's ability to handle complex workflows with minimal oversight is particularly beneficial for developers managing large-scale projects. By automating routine tasks and maintaining context across sessions, Jules reduces the time and effort required to complete intricate coding operations. This focus on streamlining workflows makes Jules an indispensable tool for developers aiming to maximize their efficiency. Recent updates to Jules have introduced several features aimed at improving both performance and usability. These enhancements are designed to address common pain points in software development while providing developers with tools to work more efficiently: These updates not only enhance the platform's technical capabilities but also improve its overall usability, making it easier for developers to integrate Jules into their existing workflows. Looking ahead, Jules is set to introduce a range of new features designed to further enhance its utility and adaptability. These upcoming updates aim to address emerging needs in the software development landscape: These features reflect Jules' commitment to evolving alongside the needs of modern developers, making sure that the platform remains a valuable asset in an ever-changing industry. With the integration of Gemini 3 Pro, Jules represents a significant advancement in autonomous coding technology. By combining sophisticated AI capabilities with practical platform enhancements, Jules is designed to meet the diverse needs of modern developers. Whether you are managing complex systems, experimenting with new ideas, or collaborating with a distributed team, Jules adapts to your workflow, offering seamless project management and efficient coding. The platform's ongoing updates and focus on usability ensure that it remains at the forefront of AI-driven development tools. As Jules continues to evolve, it is poised to become an indispensable resource for developers seeking to optimize their workflows and achieve greater productivity in their projects.
[73]
Gemini 3 release imminent - here's what to expect from Google's latest release
Gemini 3, Google's new AI model, is quietly rolling out. Early users report impressive performance, exceeding expectations. Businesses like Equifax are seeing significant productivity gains. This suggests Google is focusing on quality over hype. Gemini 3 is expected to compete strongly with rivals. Google is also preparing other AI models for release soon. Google's Gemini 3 launch is apparently imminent. Google's next big AI leap is unfolding quietly, almost cautiously. While the tech world has spent months focusing on Gemini controversies, something very different has been happening in the background. Gemini 3 has begun surfacing in real-world environments without fanfare, and early users say the silent rollout speaks louder than any marketing campaign. For months, Google's Gemini ecosystem has been overshadowed by controversies, privacy lawsuits, image-generation misfires, and API changes that left developers frustrated. Critics accused Google of prioritising speed over reliability and racing against OpenAI without proper testing. Yet in November 2025, something unexpected happened. Google rolled out Gemini 3 quietly, without a keynote, launch video, or even a blog post. No hype -- just performance, as per a report by Aim Media House. Gemini 3 first appeared subtly inside Canvas on mobile, where users noticed that the tool began producing higher-quality results than usual. Soon after, comparisons between Canvas mobile (believed to be running Gemini 3) and Canvas desktop (running Gemini 2.5 Pro) started circulating in forums and developer communities. And the reaction was consistent: the model felt dramatically more capable. One Reddit user captured the sentiment bluntly: "Everything here is real and backed up by evidence. This isn't hype." For a community accustomed to disappointment and overpromising, that line stood out. Across Reddit threads and technical communities, users noticed that traditionally complex tasks were suddenly achievable in a single shot. Testing showed Gemini 3 handling work that usually required several rounds of refinement. This included:
* Smooth, functional SVG animations
* Clean web designs generated on the first attempt
* Accurate 3D physics simulations, complete with gravity and momentum
* Touch-interaction logic built correctly without extra prompting
Developers who had struggled with Gemini 2.5 Pro's inconsistency were the first to flag the dramatic improvement. Without any announcement, Gemini 3's quality did the talking. Beyond enterprise use, benchmark results reflect Gemini 3's maturity. The model achieved gold-medal level performance at the 2025 International Collegiate Programming Contest, a respected test of algorithmic reasoning. On the "Humanity's Last Exam" benchmark, it scored 18.8%, outpacing previous generations. On WebVoyager, a benchmark for real-world web task performance, Gemini reached 83.5% accuracy. These aren't record-breaking scores, but they show consistency where earlier models faltered. Gemini is no longer lagging in reasoning and task completion; it now competes at parity or better, as quoted in a report by Aim Media House. Google's whisper-soft release strategy has sparked its own debate. Some argue the company is avoiding scrutiny.
Others believe the silence reflects a shift in culture, one focused on stability over showmanship. This time, the evidence supports the latter. A company that rushes out unfinished products doesn't secure 97% enterprise retention. A model with quality problems doesn't land 83.5% on WebVoyager. And a firm ignoring developer feedback doesn't start testing in small, controlled channels like mobile Canvas or AI Studio. The privacy lawsuit involving default Gemini activation in Gmail still lingers. The image-generation failures and developer frustrations were real. But the November rollout hints at a recalibration, a recognition that enterprise users value reliability more than spectacle, as quoted in a report by Aim Media House. The pattern suggests a shift inside the organisation. For years, Google was accused of overhyping its AI work. Now, its strategy feels quieter, more cautious, more deliberate. Instead of racing to win headlines, Google seems intent on earning back trust through output quality. The company's biggest advantage remains its ecosystem. With 44% market share in productivity suites, Gemini's native integration into Gmail, YouTube, Android, Search, and Chrome gives it pathways no competitor can replicate. In the enterprise world, the most useful model usually beats the most advanced one. Gemini 3's design appears aligned with that reality. It isn't trying to be the flashiest model -- it's trying to be the most functional across Google's products. Google appears to be preparing the public for Gemini 3's official release. The model briefly appeared inside AI Studio, the development environment where students, researchers, and engineers test Gemini models. Reports noted that its presence in AI Studio suggests a rollout is close, possibly hours or days away. AI Studio offers controls such as context length and temperature, making it an ideal environment for quiet phased deployment. Even though Gemini 2.5 Pro still appears as the top model in the interface, users spotted subtle adjustments hinting at the incoming update, as per a report by Techzine. One notable line referenced a tuning detail: Gemini 3 performs best at a temperature of 1.0, while lower settings reduce reasoning quality. That indicates deliberate optimisation for complex tasks. More signs emerged on the Vertex AI cloud platform. A variant listed as Gemini-3-pro-preview-11-2025 was spotted, suggesting Google has multiple configurations under internal testing. These surfacing references point to a coordinated rollout across developer infrastructure, as per a report by Techzine. Reports have outlined Gemini 3's biggest technical strengths:
1. Advanced coding abilities: Gemini 3 can recreate entire operating systems -- like MacOS and Windows -- inside a browser. This could reshape software development workflows.
2. Problem-solving expertise: It has solved mathematical problems previously considered unsolvable, showcasing precision in complex computations.
3. Creative range: It generates functional games, detailed 3D visualisations, and artistic designs.
The ECPT variant emerged as the most powerful during internal testing, balancing technical and creative strengths, as per a report by Techzine. According to the reports, Gemini 3 Pro will release on November 18, 2025.
Select users have limited early access through AI Studio, making this a phased rollout rather than a single launch moment, as per a report.
When will Gemini 3 fully launch? Gemini 3 Pro is scheduled for release on November 18, 2025, with limited early access already appearing in AI Studio.
What makes Gemini 3 significant? Its real-world performance, enterprise adoption, coding capability, and improved reasoning place it ahead of earlier Gemini versions, and closer to its main competitors.
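The tuning detail reported above, that Gemini 3 is said to perform best at a temperature of 1.0, maps to a single generation parameter in the API. Below is a minimal sketch using the google-genai Python SDK; the model id is an assumed preview name and the temperature guidance is the report's claim rather than an official specification.

```python
# pip install google-genai
# Sketch: keep temperature at 1.0, since the reports suggest lower values hurt reasoning.
from google import genai
from google.genai import types

client = genai.Client()  # uses the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview model id
    contents="Work through this step by step: a ball is dropped from 20 m; how long until it lands?",
    config=types.GenerateContentConfig(temperature=1.0),
)
print(response.text)
```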
[74]
Google unveils Gemini 3, its most ambitious AI model yet, in response to OpenAI
Alphabet, Google's parent company, launched Gemini 3, its new generation of artificial intelligence model, on Tuesday, aiming to compete head-on with OpenAI in the field of generative AI. Dubbed the most advanced model developed by Alphabet, Gemini 3 stands out for its ability to provide more accurate and nuanced responses, while requiring fewer instructions. CEO Sundar Pichai hailed an AI that is "useful rather than flattering," capable of better understanding the user's intentions. The model will be gradually integrated into Google's AI search products, the Gemini app, which has 650 million monthly users, and professional services via Vertex AI and the Gemini API. Google also unveiled Antigravity, a new task-oriented development platform designed for natural language assisted coding. According to Google Labs, Gemini 3 offers a "vibe coding" experience, allowing users to generate interfaces, visualizations, and interactive explanations, similar to a "digital magazine." The tool targets both developers and businesses, offering a variety of uses: creating interactive simulators, analyzing industrial videos, or generating HR content such as onboarding modules. The launch comes at a time of rapid acceleration in investment in artificial intelligence. The digital giants Alphabet, Microsoft, Amazon, and Meta are expected to collectively spend over $380bn this year on their AI infrastructures. Gemini 3 is thus positioned as a direct response to recent developments in GPT-5 at OpenAI, while signaling Google's desire to make its models more assertive and less consensual. With Gemini 3, Google intends to reestablish its technological leadership and spread AI "across its entire infrastructure," Sundar Pichai says.
[75]
Gemini 3 AI Coder: Turn Sketches into Working Apps For Free, No Coding Needed
What if you could turn your wildest app ideas into fully functional realities without writing a single line of code? Imagine sketching out a rough wireframe or typing a simple prompt, only to watch an intelligent system transform it into a polished, ready-to-use application. With the arrival of Gemini 3 Pro Coder, this isn't just a futuristic dream; it's now a reality. Google's latest AI marvel combines multimodal intelligence, advanced reasoning, and automation to empower anyone, from seasoned developers to complete beginners, to build virtually anything. And here's the kicker: it's entirely free. In a world where innovative tools often come with a hefty price tag, Gemini 3.0 is breaking barriers and providing widespread access to app development like never before. In this piece, World of AI shows how Gemini 3 is redefining what's possible in the realm of coding and app creation. You'll discover its standout features, like live previews that let you see your app evolve in real time and AI-driven code generation that eliminates tedious manual programming. Whether you're looking to build a simple utility, an interactive game, or a complex business solution, Gemini 3.0 adapts to your needs with remarkable ease. But what truly sets it apart isn't just its technical prowess; it's the way it enables creativity and innovation for everyone. Could this be the tool that levels the playing field in app development? Gemini 3.0 distinguishes itself through a suite of features designed to streamline the app development process, allowing users to focus on creativity and functionality without being hindered by technical complexities while the platform handles the technical groundwork. At the heart of Gemini 3.0 lies "Build Mode," a feature within Google AI Studio that provides a seamless environment for app creation. This mode enhances the development experience by offering tools and functionalities that simplify the process from start to finish. For instance, you can enhance user engagement by adding a chatbot or provide location-based services by integrating Google Maps. These tools enable the creation of sophisticated applications in a fraction of the time traditionally required. The versatility of Gemini 3.0 makes it suitable for a wide array of use cases, catering to both personal and professional needs, and its adaptability ensures that users from various industries can apply its capabilities effectively, from small-scale personal projects to large-scale business initiatives. One of the standout features of Gemini 3.0 is its accessibility. The platform is free to use, requiring only a Google account to get started. This opens app development to individuals, startups, and small businesses, allowing them to harness the power of advanced AI tools without incurring significant costs. The intuitive design of Google AI Studio ensures that users with minimal coding experience can navigate the platform effortlessly. Features such as live previews and user-friendly interfaces simplify the development process, making it accessible to a broad audience. This inclusivity enables a diverse range of users to bring their ideas to life, regardless of their technical expertise.
While Gemini 3.0 offers impressive capabilities, it is essential to consider data privacy and security. As with any AI-driven platform, user data is processed and stored, which may raise concerns for some users. Google emphasizes its commitment to safeguarding user data, but it is advisable to review the platform's privacy policies to ensure they align with your specific needs. Remaining informed about how your data is handled and taking proactive measures to protect sensitive information are crucial steps when using any AI-powered tool. By staying vigilant, users can confidently use Gemini 3.0's capabilities while maintaining control over their data. Gemini 3.0 represents a significant advancement in AI-powered app development. By combining state-of-the-art technology with an intuitive and accessible interface, it enables users to transform their ideas into reality with minimal effort. Whether you're an experienced developer seeking to streamline your workflow or a beginner exploring the world of app creation, Gemini 3.0 offers the tools and flexibility to meet your needs. Its free availability, robust feature set, and adaptability make it a valuable resource for users of all skill levels. As the platform continues to evolve, it holds the potential to redefine how applications are developed, fostering innovation and creativity across industries.
[76]
Gemini 3 Pro Review: Process Huge Jobs in Record Time, Stop Overpaying for Slow AI
What if you could have an AI model that's not only the fastest and most powerful on the market but also surprisingly affordable? Imagine a tool so advanced it can transform a rough sketch into a fully functional website, debug complex code in seconds, or even create interactive educational tools, all with a single prompt. Bold claims, right? But Google's Gemini 3 Pro lives up to the hype, redefining what's possible in artificial intelligence. Whether you're a developer building intricate applications, an educator crafting engaging lessons, or a creative professional designing unique content, this model promises to transform the way you work. It's not just another AI, it's a fantastic option. In the video below World of AI explore how Gemini 3.0 Pro combines multimodal intelligence, advanced reasoning, and seamless integration to deliver unparalleled performance. You'll discover its new ability to process diverse inputs like text, images, and PDFs, and how it excels in tasks ranging from coding to interactive learning. We'll also dive into its real-world applications, from simplifying workflows to fostering innovation across industries. By the end, you'll see why this model is being hailed as the most versatile and efficient AI tool ever created. Could this be the model that finally bridges the gap between imagination and reality? Let's find out. Gemini 3.0 Pro distinguishes itself with its ability to process and integrate diverse inputs such as text, images, sketches, and PDFs, delivering precise and high-quality outputs. Its advanced reasoning and multimodal understanding allow you to transform rough sketches into functional websites or detailed diagrams. Furthermore, its coding capabilities surpass many existing AI models, allowing the creation of applications, games, and simulations with minimal effort. Some of its standout features include: These features make it a versatile tool for professionals across industries, from software developers to educators, allowing them to streamline workflows and achieve more in less time. Gemini 3 Pro's capabilities are validated through its exceptional performance across multiple benchmarks, showcasing its ability to handle tasks requiring precision, depth, and adaptability. Key achievements include: These results position Gemini 3.0 Pro as a leader in AI performance, capable of addressing a wide range of technical and creative challenges with precision and efficiency. Here are more guides from our previous articles and guides related to Gemini 3 Pro that you may find helpful. Gemini 3.0 Pro is not just a theoretical innovation; it delivers practical solutions across various fields, making it a valuable asset for professionals in education, software development, and creative industries. Its ability to convert complex inputs into actionable outputs enhances productivity and fosters innovation. These applications demonstrate the model's ability to enhance workflows, reduce manual effort, and drive innovation across diverse sectors. Gemini 3 Pro integrates effortlessly with popular tools and platforms, making sure accessibility and ease of use for a wide range of users. Key integrations include: These integrations ensure that users can harness the power of Gemini 3 Pro without requiring extensive technical expertise, making it an accessible tool for professionals and organizations alike. Gemini 3.0 Pro's pricing reflects its premium nature, aligning with its focus on high-volume, complex applications. 
While its rates may be prohibitive for smaller-scale projects, they are well-suited for organizations and professionals managing large-scale, automated tasks that demand precision and efficiency. During testing, Gemini 3 Pro demonstrated its ability to generate complex outputs with remarkable accuracy. It successfully created landing pages, browser-based operating systems, and animated SVGs. Its debugging automation further enhances its utility, allowing you to identify and resolve errors efficiently. For example, when developing an application, the model can pinpoint coding issues and suggest actionable solutions, saving significant time and effort. This feature is particularly valuable for large-scale projects where manual debugging would be time-intensive and prone to errors. Gemini 3 Pro excels in multimodal learning and data visualization, converting complex information into digestible formats. Whether you are teaching a class, analyzing data, or presenting findings, its interactive dashboards and visualizations enhance comprehension and engagement. These features make it an invaluable resource for educators, researchers, and professionals seeking to communicate complex ideas effectively and engagingly. Gemini 3.0 Pro is poised to drive significant advancements in AI autonomy and creativity. Its ability to manage large-scale tasks and generate innovative solutions positions it as a versatile partner for learning, building, and planning. As AI technology continues to evolve, Gemini 3.0 Pro represents a critical step forward, empowering users to achieve more with less effort and redefining the boundaries of what artificial intelligence can accomplish.
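The review above repeatedly highlights mixed inputs, such as turning a rough sketch into a working page. For readers who want to try that through the API rather than the AI Studio UI, here is a minimal sketch using the google-genai Python SDK; the model id is an assumed preview name and the file name is a placeholder.

```python
# pip install google-genai
# Sketch: send an image plus an instruction and get code back.
from google import genai
from google.genai import types

client = genai.Client()  # uses the GEMINI_API_KEY environment variable

with open("wireframe.png", "rb") as f:  # placeholder sketch/wireframe file
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview model id
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Turn this wireframe into a single-file HTML landing page with inline CSS.",
    ],
)
print(response.text)
```

The same pattern extends to PDFs or video by changing the MIME type, which is the kind of multimodal input the review emphasizes.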
[77]
Google's Gemini 3 Pro: Builds Browser Game from a Napkin Doodle!
What if your next big idea didn't just stay on paper but came to life in a matter of seconds? Imagine sketching a rough concept for a game on a napkin, only to watch it transform into a fully functional, interactive experience before your eyes. With the release of Google Gemini 3.0 Pro, this is no longer a futuristic fantasy, it's a reality. This innovative AI platform doesn't just assist; it collaborates, turning abstract ideas into tangible creations with a level of precision and speed that feels almost magical. From its multimodal processing to its ability to autonomously code, Gemini 3.0 Pro is rewriting the rules of what artificial intelligence can achieve. Universe of AI uncover how Gemini 3.0 Pro is transforming creative workflows and redefining AI's role in industries like game development, education, and beyond. You'll discover how its agentic AI capabilities empower users to delegate complex tasks, how it bridges the gap between sketches and functional applications, and why it's setting a new standard in reasoning and coding reliability. Whether you're a developer dreaming of faster prototyping or a creator looking to break technical barriers, Gemini 3.0 Pro offers a glimpse into a future where innovation feels effortless. The possibilities are vast, but the real question is: how will you use it? Gemini 3.0 Pro is more than just an incremental update; it is a substantial leap forward in AI technology. Several key features set it apart: These capabilities combine to create a platform that is not only powerful but also practical, offering solutions tailored to the needs of professionals in various fields. The enhanced reasoning capabilities of Gemini 3.0 Pro make it a standout tool for addressing intricate challenges. The model excels at breaking down complex tasks into manageable steps, providing actionable insights that streamline workflows. For professionals, this means greater efficiency in areas such as: By delivering precise and efficient solutions, Gemini 3.0 Pro saves time and effort, allowing users to focus on higher-level objectives. Here are more guides from our previous articles and guides related to Google Gemini 3 Pro that you may find helpful. Gemini 3.0 Pro sets a new standard in multimodal processing by integrating multiple input types into a unified framework. It can interpret and combine: This capability allows users to provide diverse inputs, such as a hand-drawn sketch or a combination of text and images, and receive fully functional outputs. For instance, the AI can generate an interactive application from a simple sketch, complete with suggestions for refinement. This feature is particularly valuable for creating educational content, simulations, and interactive tools, making Gemini 3.0 Pro a versatile asset for professionals in creative and technical domains. For developers, Gemini 3.0 Pro introduces a new feature known as "vibe coding." This functionality captures the essence of creative ideas and translates them into modular, clean code. Imagine sketching a concept for a browser game and having the AI generate a fully operational version in seconds. This capability not only accelerates development timelines but also lowers technical barriers, allowing creators to bring their visions to life with minimal effort. By automating routine coding tasks, Gemini 3.0 Pro enables developers to focus on innovation and refinement. 
Gemini 3.0 Pro consistently outperforms its competitors, including GPT-5.1, in critical areas such as: Additionally, it excels in specialized tasks like screen understanding and Optical Character Recognition (OCR). These benchmarks highlight its ability to handle complex workflows with professional-grade reliability. For users, this translates into a dependable AI partner capable of delivering consistent and high-quality results. One of the most innovative aspects of Gemini 3.0 Pro is its agentic AI capability. Unlike traditional reactive models, this feature enables the AI to operate autonomously and proactively. It can: By automating routine or repetitive tasks, Gemini 3.0 Pro allows users to focus on strategic decision-making and creative problem-solving. For example, it can generate customized simulations or interactive tools, freeing up time for more complex and value-driven activities. The practical applications of Gemini 3.0 Pro are vast and varied. During live demonstrations, the model successfully transformed a simple sketch into a fully functional browser game, complete with deployment instructions and customization options. This capability is particularly beneficial for: Its ability to bridge the gap between concept and execution makes it an invaluable tool for professionals aiming to innovate efficiently and effectively. As Gemini 3.0 Pro continues to evolve, its capabilities are expected to expand, further enhancing its utility and impact. Future updates are likely to focus on refining its reasoning, multimodal processing, and agentic features, solidifying its position as a leader in AI-driven innovation. Whether you are a developer, educator, or creator, Gemini 3.0 Pro offers a glimpse into the future of artificial intelligence, where collaboration and creativity are seamlessly integrated into everyday workflows.
[78]
Google Gemini 3 launch gets attention from Elon Musk and Sam Altman: Here is what they said
According to Google's CEO, Gemini 3 is the company's "most intelligent model." Google has introduced Gemini 3, the latest version of its artificial intelligence model, nearly two years after revealing the first Gemini system. That original version was built in response to the rapid rise of OpenAI's ChatGPT, which kicked off a major shift in the tech world toward advanced AI tools.

To mark the new release, Google CEO Sundar Pichai posted a short but excited message on X. His one-word post simply read, "Geminiii," showing his enthusiasm for the company's newest AI model. In a blog post, Pichai also called Gemini 3 Google's "most intelligent model."

The launch quickly drew reactions from two of Pichai's biggest rivals in the AI race: Elon Musk, who runs xAI, and Sam Altman, CEO of OpenAI. Musk replied to Pichai's post with a brief message: "Congrats," notably without any emojis. Altman also posted on X, "Congrats to Google on Gemini 3! Looks like a great model." These statements highlight that even as the tech giants compete fiercely, they also recognise each other's breakthroughs as milestones for the entire industry.

According to Google, Gemini 3 brings major improvements in three main areas: reasoning, multimodal understanding, and autonomous task execution. The model can be used across several Google services, including Search, the Gemini app, and developer tools. The first model in the new series is Gemini 3 Pro, which Google says performs better than Gemini 2.5 Pro on important industry benchmarks, including LMArena, Humanity's Last Exam, GPQA Diamond, and MathArena Apex.

Google has also introduced Gemini 3 Deep Think, a version focused on tougher reasoning and complex problem-solving. The company plans to roll out this mode to Google AI Ultra subscribers after it completes more safety checks.
[79]
Google launches Gemini 3 with big upgrades in reasoning, multimodal performance and agentic tools: All details
The launch includes Google Antigravity, a new agent-first development platform built to automate complex coding and software tasks. After much anticipation, Google has finally announced the next version of its flagship artificial intelligence model, Gemini 3. According to Google's blog post, the new model improves reasoning, multimodal understanding, and autonomous task execution and is available for use across multiple Google products, including Search, the Gemini app, and developer platforms.

Gemini 3 Pro, which was released in preview, is the first model in the new series. Google claims it outperforms its predecessor, Gemini 2.5 Pro, in key industry benchmarks. The company reported high scores on LMArena, Humanity's Last Exam, GPQA Diamond, and MathArena Apex, as well as improved performance in multimodal assessments such as MMMU-Pro and Video-MMMU. The model also received a high accuracy rating from SimpleQA Verified.

The company has also released a new version, Gemini 3 Deep Think, which is designed for more advanced reasoning tasks. Early testing by the company shows that it performs better on complex problem-solving benchmarks such as Humanity's Last Exam and ARC-AGI 2. Google stated that this mode will be made available to Google AI Ultra subscribers following additional safety checks.

For the first time, Google is including its newest model in Search at launch. AI Mode in Search will now use Gemini 3 to create more dynamic layouts, interactive elements, and detailed visual explanations. The model is also accessible via the Gemini API, Google AI Studio, Vertex AI, the Gemini CLI, and a new agent-focused development environment called Google Antigravity. Antigravity allows developers to use AI agents to plan, write, and validate code within an integrated workspace. The platform includes Gemini 3 Pro, Google's most recent computer-use model for browser-based actions, as well as an updated image editing model.

According to Google, Gemini 3 also improves long-term planning. The model received the highest score in the most recent Vending-Bench 2 test, which assesses an AI system's ability to maintain consistent decision-making over extended tasks. These capabilities are also being made available to end users via Gemini Agent within the Gemini app. Google also announced that additional models in the Gemini 3 lineup will be released in the coming weeks.
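As a brief aside to the access routes listed in the article above: the Gemini API (with a Google AI Studio key) and Vertex AI can both be reached from the same google-genai Python SDK. The sketch below assumes a placeholder model ID and a hypothetical Google Cloud project name; it is an illustration of the two routes, not an official quickstart.

```python
# Sketch of the two access routes mentioned above, using the google-genai SDK.
# The model ID is a placeholder; substitute whatever preview name Google publishes.
from google import genai

# Route 1: Gemini Developer API, authenticated with a Google AI Studio key.
studio_client = genai.Client(api_key="YOUR_AI_STUDIO_KEY")

# Route 2: Vertex AI, authenticated through a Google Cloud project.
vertex_client = genai.Client(
    vertexai=True, project="your-gcp-project", location="us-central1"
)

for client in (studio_client, vertex_client):
    reply = client.models.generate_content(
        model="gemini-3-pro-preview",  # placeholder
        contents="Summarize the key changes in this release in two sentences.",
    )
    print(reply.text)
```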
Google launches Gemini 3, its most advanced AI model yet, featuring enhanced reasoning capabilities, improved factual accuracy, and record-breaking benchmark scores. The release includes Antigravity, a new AI-first coding environment, and deeper integration with Google's product ecosystem.
Google has officially launched Gemini 3, marking a significant milestone in the company's artificial intelligence development journey. The new model represents Google's most advanced AI system to date, featuring enhanced reasoning capabilities, improved factual accuracy, and unprecedented performance across multiple benchmarks [1][2].
The release comes just seven months after Gemini 2.5 and represents Google's response to the intensifying competition in the AI space, arriving less than a week after OpenAI's GPT 5.1 release and two months following Anthropic's Sonnet 4.5 launch [2].

Gemini 3 has achieved remarkable results across various industry benchmarks, establishing new performance records in multiple categories. The model scored an impressive 37.5 percent on Humanity's Last Exam, a challenging assessment designed to test PhD-level knowledge and reasoning across mathematics, science, and humanities [4]. This score significantly surpassed the previous record holder, OpenAI's GPT-5 Pro, which achieved 26.5 percent [4].

The model has also claimed the top position on the LMArena leaderboard with an ELO score of 1,501, beating its predecessor Gemini 2.5 Pro by 50 points [1]. In coding capabilities, Gemini 3 achieved a remarkable 76.2 percent success rate on the SWE-bench Verified test, which evaluates a model's ability to generate functional code [1].

Addressing one of the persistent challenges in AI development, Google claims Gemini 3 shows substantial improvement in factual accuracy. The model scored 72.1 percent on the 1,000-question SimpleQA Verified test, setting a new record for factual correctness [1]. While this means the model still produces incorrect answers nearly 30 percent of the time, Google emphasizes this represents significant progress in addressing the hallucination issues that have plagued large language models.

"With Gemini 3, we're seeing this massive jump in reasoning," said Tulsee Doshi, Google's head of product for the Gemini model. "It's responding with a level of depth and nuance that we haven't seen before" [2].

Alongside Gemini 3, Google unveiled Antigravity, a revolutionary AI-first integrated development environment designed to transform how developers write and debug code. The platform combines a ChatGPT-style prompt interface with command-line functionality and browser integration, enabling multi-pane agentic coding similar to tools like Cursor 2.0 [2].
"The agent can work with your editor, across your terminal, across your browser to make sure that it helps you build that application in the best way possible," explained DeepMind CTO Koray Kavukcuoglu
2
. This approach exemplifies what Google calls "vibe coding," where developers describe their goals in natural language and let the AI assemble the necessary interface or code.Related Stories
Gemini 3 introduces enhanced visual output capabilities, automatically generating diagrams, animations, and interactive visualizations when it determines visual elements would be more effective than text-based responses
3
. The model can create "magazine-style" layouts complete with photos and interactive modules that invite user input for further customization3
.Google is integrating these capabilities directly into Search, where queries about complex topics like physics problems will prompt Gemini 3 to generate custom interactive visualizations automatically
5
. The company reports a 70 percent increase in visual search usage, demonstrating growing user adoption of AI-enhanced search features5
.
With over 650 million monthly active users of the Gemini app and 13 million software developers incorporating the model into their workflows, Google's AI platform has achieved significant market penetration [2]. The company is positioning Gemini 3 as a hedge against potential AI market volatility, with Google DeepMind CEO Demis Hassabis noting that Google's integration strategy across existing products provides protection even if the AI bubble bursts [5].

However, experts caution against over-interpreting benchmark improvements. "If a model goes from 80 percent to 90 percent on a benchmark, what does it mean?" asks Luc Rocher at the University of Oxford. "There is no number that we can put on whether an AI model has reasoning, because this is a very subjective notion" [4].