4 Sources
[1]
Google Antigravity is an 'agent-first' coding tool built for Gemini 3
Alongside today's announcement of Gemini 3 Pro, Google has revealed Antigravity, a development tool that uses Gemini 3 Pro along with other third-party models. Google says that Antigravity, which supports multiple agents and gives them direct access to the editor, terminal, and browser, is designed for an "agent-first future."

One of the key components of Antigravity is how it reports on its own work. As it completes tasks, it produces what Google calls Artifacts: task lists, plans, screenshots, and browser recordings that are intended to verify both the work it has done and what it will do. Antigravity also reports on its actions and external tool use along the way, but Google says that Artifacts are "easier for users to verify" than full lists of a model's actions and tool calls.

Antigravity's other big change is that it offers two main usage views. The default Editor view offers a familiar Integrated Development Environment (IDE) experience, similar to rivals like Cursor and GitHub Copilot, with an agent in a side panel. The new Manager view is instead designed for controlling multiple agents at once, allowing each to work more autonomously. Google compares it to "mission control for spawning, orchestrating, and observing multiple agents across multiple workspaces in parallel."

Google has also introduced more ways to give feedback to AI agents as they work, including the ability to leave comments on specific Artifacts for an agent to take into account without interrupting its work. The company also says that agents in Antigravity will be able to "learn from past work," retaining specific snippets of code or the steps required to carry out certain tasks.

Antigravity is available in a public preview now, compatible with Windows, macOS, and Linux. It's free to use, with what Google calls "generous rate limits" for Gemini 3 Pro; it also supports Claude Sonnet 4.5 and OpenAI's GPT-OSS. Google says rate limits refresh every five hours, and that only "a very small fraction of power users" will ever hit them.
[2]
Google's Antigravity puts coding productivity before AI hype - and the result is astonishing
Screenshots, recordings, and browser testing power agent workflows.

Google today announced a new(ish) programmer's development environment called Antigravity. The company calls it "a new era in AI-assisted software development." And, from a first look I took at its functionality via this video, it well might be. At least for some things.

Some aspects of Antigravity are definitely astonishingly good. It has some features that I think can truly help move your agent-assisted programming forward in very productive ways. But let's bring this announcement back to earth for a minute. Although the company never mentioned it in its blog announcement or online demos, Antigravity is a fork of Microsoft's open-source VS Code. You can tell from the screenshots I pulled from the demo.

This is not a bad thing. In fact, I think it's fairly fantastic, because it means that while Google is adding some powerful new agentic features, it's all wrapped up in an environment most coders are very familiar with.

When I first used VS Code with OpenAI's Codex, it was powerful, indeed. I completed a tremendous amount of coding in a very short time. Codex occupied the right pane of the three-pane interface (the other two panes were a file browser and a code editor). I was able to use CleanShot X to grab screenshots and paste them right into VS Code for Codex to see. In fact, I found the ability to supply the AI with screenshots to be, by far, one of the most powerful tools for agentic coding.

But that was then; this is now. Antigravity can take its own screenshots. It can also capture screen recordings. Not only that, you can use Antigravity to make comments on the screenshots and screen recordings to guide the Gemini 3 LLM on what you want changed.

But wait, there's more. Antigravity includes a Google Chrome extension that enables the AI to run your code within a real Chrome instance, test it, observe its behavior, and then take action. That's some next-level stuff right there. To be clear, the browser integration features are limited to browser-only applications. While the AI might be able to test the WordPress plugins I worked on earlier (because WordPress also runs in the browser), it wouldn't be able to test, for example, an iOS app for the iPhone. However, the ability of the IDE's agents to interact deeply with the look and feel of a web app in real time could lead to a tremendous boost in productivity.

When you set up Antigravity, you can determine the level of autonomy to grant the agent. Google has raised the prominence of the AI chatbot pane in VS Code, er, Antigravity. The Home screen of Antigravity isn't the file browser or the editor; it's the chatbot interaction screen. This section of the interface, known as the Manager surface, actually becomes an agent dashboard where you can invoke and track numerous agent processes from a single location. The company describes it as the interface for spawning, orchestrating, and observing multiple agents across multiple workspaces in parallel.

That last sentence is worth a moment of deconstruction. A workspace in VS Code is merely a grouping of files, usually for one project.
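For the unfamiliar, a multi-root workspace is just a small JSON file that pulls several project folders into one window. Here's a minimal sketch of a .code-workspace file; the folder names are hypothetical, purely for illustration:

```jsonc
// my-projects.code-workspace -- a hypothetical multi-root workspace file.
// Each "folders" entry pulls one project into the same VS Code window.
{
  "folders": [
    { "name": "wp-plugin", "path": "./my-wordpress-plugin" },
    { "name": "data-tool", "path": "./my-python-tool" }
  ],
  // Workspace-wide settings apply to every folder listed above.
  "settings": {
    "editor.formatOnSave": true
  }
}
```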
For example, I routinely have one workspace dedicated to whatever WordPress plugin I'm working on, and another to a completely different Python project I'm coding. With Antigravity's ability to manage multiple agents across multiple workspaces, you can have multiple projects going at once, all with different agents carrying out different tasks.

On the one hand, this could be very powerful. On the other hand, the context switching on the part of our very organic human brains could prove challenging. I find it challenging to work on two programming projects simultaneously. I often switch between projects from one day to the next. But switching back and forth dynamically between projects throughout the workday could lead to errors or brain melt. Still, it's available and lets you optimize for your working style.

This is a small thing, but it's cool. When Google demoed the functionality of Antigravity, the presenter wanted a logo for his app. Right from within Antigravity, he asked Gemini to create an image using Nano Banana. Nano Banana is an impressive image generator inside Gemini 2.5 Flash (and now, presumably, Gemini 3). So it's no big surprise to see it crank out some attractive logos.

But up until now, we've seen some logical walls between AI implementations. The coding chatbots are more coding-focused and less conversational. The chatty chatbots have fewer coding agent capabilities. However, Antigravity was able to invoke the Nano Banana capabilities to create the logo directly within the Antigravity IDE interface. UX design often requires creating a multitude of small graphic elements. Normally, we'd switch out of the IDE, drop into a graphics program, generate the files, and upload the files. Lather, rinse, repeat. However, since Antigravity can do it all from within the agent management context inside the IDE, the process can save a bunch of steps. Saving steps is something every professional programmer needs to do.

Most coding agents provide some kind of pre-execution plan and post-completion summary of actions taken. That's not new. Antigravity does the same thing. But what's impressive about Antigravity is that, because it has browser interactivity and screen recording built in, it can demonstrate its actions in a screen recording. Let's say you ask it to implement a new feature. At the end of its processing run, the agent can show the steps it took to build the feature. It can also provide a screen recording that demonstrates how it tested the new feature and what it saw on the screen.

That lets you take a quick look at what got produced. And because Antigravity provides an easy mechanism to add Google Docs-like comments to code snippets, screenshots, and screen recordings, you can actually mark up the walkthrough and show the AI what you want to change. I don't think it's possible to overstate how beneficial that can be to productivity.

I haven't had a chance to use Antigravity yet, but I will. The fact that it's VS Code-based makes it an easy consideration. That provides the new IDE with an enormous library of plugins and extensions right out of the gate. I think most professional programmers would be far more inclined to try yet another VS Code fork (which they know can be integrated with their projects and workflows) than some brand-new, barely-out-of-beta, AI-focused IDE. Productivity matters to pro coders.
What impresses me about Antigravity is that many of its features are purely productivity-focused. Yes, they work with the AI agents and make describing work to AI agents easier, but it's productivity that drives these features, not AI hype. That's a strong approach.

I'm just a little disappointed that Google didn't fess up right out of the gate that this is really a VS Code mod. I think more programmers would have been immediately interested in seeing what was done. But that's water under the bridge.

I've reached out to Google, because one thing I would like clarification on is where Jules (Google's agent-first coding AI) fits into this puzzle. Jules works mostly on your GitHub repo, whereas Antigravity clearly works on local code, although it's able to send commits back to GitHub. Does Jules work with Antigravity, is Antigravity supplanting Jules, or are they still just two different paradigms? At any rate, this looks like it could be a winner.

Have you tried any AI-assisted development environments yet? If so, how do they compare to your normal workflow? Would the ability to capture and annotate screenshots or recordings right inside the IDE change how you develop or debug? How important is deep browser-level testing to your own projects, and would you trust an agent to operate across multiple workspaces at once? Finally, does Google's decision not to emphasize its VS Code origins matter to you, or is capability all that counts? Let us know in the comments below.
[3]
Google just made its own Visual Studio Code
Google just released Antigravity, a brand-new agent-first development platform announced alongside the Gemini 3 Pro model. This is an integrated development environment, or IDE, with a chatbot that takes the lead on complex, multi-step tasks. The whole thing looks like a fork of Microsoft's Visual Studio Code, because the icons and interface are almost a copy.

Antigravity is Google's answer to other AI-powered IDEs, like Cursor and GitHub Copilot. The core idea is that you act as the architect, delegating complex, end-to-end software tasks to intelligent agents that can operate across your editor, terminal, and even the web browser. I would say this is a smart move for getting the product out quickly, even if it feels a bit lazy. It immediately lowers the barrier to entry because many developers already know how to navigate VS Code.

The default is the familiar Editor view, which places the AI agent in a side panel, much like other competitors. The real difference is in the Manager view. This view is specifically for controlling multiple agents simultaneously, letting them work autonomously and in parallel across different workspaces. Google describes this as a "mission control" for orchestrating a fleet of specialized agents.

As an agent completes tasks, it produces "Artifacts." These aren't just lists of every action the model took; they are summaries like task lists, plans, screenshots, and browser recordings that verify the work that has been done and what the agent plans to do next.

Feedback is also handled differently on this platform, which lets you leave comments directly on specific Artifacts. The agent takes this feedback into account without having to stop its current work, which is really great; having to interrupt the model to give feedback is a real pain point on Gemini and NotebookLM. The agents are also designed to learn from past work, retaining crucial code snippets or the steps required for certain recurring tasks. That addresses a big issue with Gemini: one of the reasons I never used it for coding help was that it kept making the same mistakes in a cycle. It makes the initial mistake, tries to fix it, then tries to fix that fix, and then makes the same mistake again.

Still, Google Antigravity seems to have its own issues. When I tried loading it up, it wouldn't let me sign in despite completing verification. I pay for the AI subscription, so this wasn't a free-tier issue. Based on the replies to Google's social media post, I'm not the only one with this problem.

The new tool is built around Gemini 3 Pro, which Google claims excels at agentic workflows and complex coding tasks. The model enables what Google calls "vibe coding," where developers hand a high-level idea or natural-language prompt to an LLM and let it produce the implementation.

Interestingly, Antigravity isn't locked down to just Google's ecosystem. While it heavily features Gemini 3 Pro, it also supports third-party models like Anthropic's Claude Sonnet 4.5 and OpenAI's GPT-OSS. I think this choice is top-tier for developers, because it gives them options and prevents immediate vendor lock-in.

Antigravity is currently available in a public preview, and you can download it for Windows, macOS, and Linux. Google is offering the platform for free during the preview, complete with what it calls "generous rate limits" for Gemini 3 Pro. I saw comments showing that users have already started hitting the quota limit surprisingly fast, one of them after just three prompts.

Source: Google Blog
[4]
Could Google's Antigravity spell the end of manual coding?
Meet the autonomous platform that handles testing, fixing, and writing the entire software stack.

What's happened?

The future of software development just took a giant leap forward, with Google officially unveiling its breakthrough Antigravity platform, launched right alongside the debut of the powerful Gemini 3 model. Antigravity isn't merely another clever tool to help programmers type faster; Google is pitching this as an entirely new class of digital coworker. Instead of just suggesting the next line of code, this platform acts as an AI team leader, orchestrating multiple intelligent agents to manage complex software tasks. It is fundamentally transforming the digital workbench where programmers do their work into a dynamic, "agent-first" environment designed for delegation.

* Antigravity is an autonomous development system that uses multiple AI agents simultaneously to plan, write, test, and fix entire code features based on simple instructions.
* The system's brain is the powerful Gemini 3 Pro model, leveraging its advanced reasoning to tackle long, multi-step coding problems.
* The AI operates across all parts of the coding environment, such as the editor, the command line, and even the web browser, acting as a single, unified entity.

Why this matters: This platform matters because it changes the developer's job description. Instead of spending hours writing boilerplate code or chasing frustrating bugs, a programmer can now act as a high-level architect, telling the AI exactly what feature to build and letting it handle the execution. Google is making a direct bid to dominate the next generation of coding by prioritizing end-to-end autonomy and building trust in the AI's output. This launch signals a serious industry shift:

* True Autonomy vs. Assistance: Current tools are best described as super-smart helpers; Antigravity aims to be the fully independent programmer, doing the work for you.
* Verifiable Work: The system generates "Artifacts," like task lists and screen recordings of the work being done, giving human developers proof of work and full transparency.
* Higher-Level Focus: By taking over the tedious, repetitive work, Antigravity frees up human developers to focus their creativity on the truly innovative and strategic parts of an application.

Why should I care? For the everyday user, this means the software and apps you rely on will likely get new features and performance updates at a blistering pace. For developers, this means the shift from meticulous, line-by-line debugging to what can only be described as "vibe coding," where you only need to provide the high-level intent. For anyone with a great idea, Antigravity dramatically lowers the barrier to entry, potentially making you a one-person development studio with just a high-level prompt.

* Faster Feature Delivery: Companies will spend less time debugging and more time shipping, meaning you get access to better apps much sooner.
* Developer Empowerment: Small teams or solo creators can now compete with massive companies, as they can automate complex coding tasks that previously required an entire engineering department.
* Free to Try: The core platform is available right now as a free public preview for Windows, Mac, and Linux, meaning you can dive in today and see what it can build for you.

Okay, so what's next? Antigravity's debut intensifies the war for the developer's attention, squarely challenging other agentic ambitions from giants like OpenAI and even more specialized tools like Cursor.
Since Google allows its platform to utilize models from competitors, this will drive an intense and rapid feature competition across the entire AI ecosystem, forcing everyone to elevate their game. The key thing to watch is how quickly real-world developers adopt this new, autonomous workflow. Antigravity isn't just about writing code faster; it's about giving creators the ability to delegate development and bring their biggest ideas to life without delay. If you have an idea ready to fly, now is the time to see if Google's AI platform can lift it off the ground.
Google unveils Antigravity, a revolutionary AI-powered development environment that enables multiple autonomous agents to handle complex coding tasks. Built on Gemini 3 Pro and forked from VS Code, it transforms developers into architects who delegate end-to-end software development to intelligent AI agents.
Google has unveiled Antigravity, a groundbreaking development environment that represents what the company calls "a new era in AI-assisted software development." [1] Launched alongside the announcement of Gemini 3 Pro, Antigravity is designed as an "agent-first" platform that fundamentally transforms how developers approach software creation.
The platform enables multiple AI agents to work autonomously across the entire development environment, including the editor, terminal, and web browser. Rather than serving as a simple coding assistant, Antigravity positions itself as a comprehensive digital workforce that can handle complex, multi-step programming tasks with minimal human intervention.
While Google presents Antigravity as revolutionary, the platform is actually built as a fork of Microsoft's open-source Visual Studio Code. [2] This strategic decision provides immediate familiarity for developers already comfortable with VS Code's interface and functionality. The choice to build upon VS Code's foundation allows Google to focus on implementing powerful agentic features while maintaining an environment that millions of developers already know how to navigate. [3] This approach significantly lowers the barrier to entry for adoption.

Antigravity offers two distinct usage modes tailored to different development workflows. The default Editor view provides a traditional Integrated Development Environment experience, similar to competitors like Cursor and GitHub Copilot, with an AI agent operating from a side panel.
The innovative Manager view represents Antigravity's most distinctive feature, designed specifically for controlling multiple agents simultaneously. Google describes this interface as "mission control for spawning, orchestrating, and observing multiple agents across multiple workspaces in parallel." [1]
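Google hasn't published an agent API for Antigravity, but purely as a conceptual sketch, "spawning and observing multiple agents in parallel" amounts to fanning out long-running tasks and watching each one's status. Every type and function below is hypothetical:

```typescript
// Hypothetical types only -- Antigravity's real agent API is not public.
type AgentStatus = "planning" | "working" | "awaiting-review" | "done";

interface AgentTask {
  workspace: string; // which project the agent is assigned to
  goal: string;      // high-level instruction, e.g. "add OAuth login"
}

// Simulate one agent working through a task and reporting back.
async function runAgent(task: AgentTask): Promise<AgentStatus> {
  console.log(`[${task.workspace}] agent started: ${task.goal}`);
  await new Promise((r) => setTimeout(r, 1000)); // stand-in for real work
  return "awaiting-review";
}

// "Mission control": launch agents across workspaces in parallel,
// then observe how each one finished.
async function missionControl(tasks: AgentTask[]): Promise<void> {
  const results = await Promise.allSettled(tasks.map(runAgent));
  results.forEach((res, i) =>
    console.log(
      `[${tasks[i].workspace}]`,
      res.status === "fulfilled" ? res.value : res.reason,
    ),
  );
}

missionControl([
  { workspace: "wp-plugin", goal: "fix settings-page crash" },
  { workspace: "web-app", goal: "add dark mode and record a browser test" },
]);
```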
This allows developers to manage multiple projects concurrently, each with dedicated AI agents handling different aspects of development.

One of Antigravity's key innovations is its approach to work verification through what Google calls "Artifacts." As agents complete tasks, they generate comprehensive documentation including task lists, plans, screenshots, and browser recordings. [1]
These Artifacts serve as verification tools, making it easier for developers to understand and validate the AI's work compared to traditional action logs.

The platform also introduces sophisticated feedback mechanisms that allow developers to leave comments directly on specific Artifacts. This enables continuous guidance without interrupting the agent's workflow, addressing a common frustration with current AI coding tools.
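Google hasn't published a schema for Artifacts, but a plausible record, sketched here as a hypothetical TypeScript type, would need to tie together the artifact's kind, its payload, and the Docs-style comments a developer leaves on it:

```typescript
// Hypothetical shape of an Artifact record; Google has not published a schema.
interface ArtifactComment {
  author: string;    // who left the feedback
  body: string;      // e.g. "make this button blue"
  resolved: boolean; // has the agent acted on it yet?
}

interface Artifact {
  kind: "task-list" | "plan" | "screenshot" | "browser-recording";
  title: string;
  agentId: string;             // which agent produced it
  payload: string;             // markdown for plans/task lists, a file path for media
  comments: ArtifactComment[]; // feedback the agent folds in without stopping work
}
```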
Antigravity's browser integration capabilities represent a significant advancement in AI-assisted development. The platform includes a Google Chrome extension that enables AI agents to run code within actual browser instances, test functionality, observe behavior, and take corrective action. [2] This real-time testing capability allows agents to capture screenshots and screen recordings independently, then use developer comments on these visual elements to guide improvements. While currently limited to browser-based applications, this feature could dramatically boost productivity for web development projects.
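Antigravity's extension isn't user-scriptable in any documented way, but the kind of in-browser verification described here will be familiar from tools like Playwright. A rough analogy only, with a hypothetical URL and selectors:

```typescript
import { chromium } from "playwright";

// Rough analogy: automated in-browser verification of a web feature,
// similar in spirit to what Antigravity's agents do via the Chrome extension.
async function verifyLoginFlow(baseUrl: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto(`${baseUrl}/login`);           // exercise the feature
  await page.fill("#email", "demo@example.com"); // hypothetical selectors
  await page.fill("#password", "correct-horse");
  await page.click("button[type=submit]");

  await page.waitForURL("**/dashboard");                      // observe the behavior
  await page.screenshot({ path: "artifacts/login-ok.png" }); // evidence for human review

  await browser.close();
}

verifyLoginFlow("http://localhost:3000").catch(console.error);
```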
Despite being built around Gemini 3 Pro, Antigravity maintains compatibility with third-party AI models, including Anthropic's Claude Sonnet 4.5 and OpenAI's GPT-OSS. [3] This flexibility prevents immediate vendor lock-in and provides developers with options based on their specific needs and preferences.

The platform is currently available as a free public preview across Windows, macOS, and Linux systems. Google offers what it describes as "generous rate limits" for Gemini 3 Pro usage, with limits refreshing every five hours. However, some early users report hitting quotas faster than expected, with rate limiting occurring after just a few prompts in some cases. [3]
Antigravity represents a fundamental shift from AI-assisted coding to truly autonomous development. [4] Rather than helping developers write code faster, the platform aims to transform developers into high-level architects who delegate complex implementation tasks to AI agents.

This approach could dramatically lower barriers to software development, potentially enabling small teams or individual creators to compete with larger development organizations. The platform's emphasis on "vibe coding" - translating high-level ideas into functional software through natural language prompts - suggests a future where technical implementation expertise becomes less critical than strategic thinking and creative vision.
Summarized by Navi