3 Sources
[1]
What Is Vibe Coding? How Anyone Can Make Apps With the Help of AI
When ChatGPT arrived in late 2022, it kicked off an AI boom that hasn't stopped since and showed how powerful natural-language tools could be. Since then, we've seen chatbots, copilots and AI agents move into everyday tech. Vibe coding describes a new way of building software where you prompt an AI model with a line of text and it generates most of the code. Even people with zero programming experience can create apps and full websites by describing what they want in natural language.

Andrej Karpathy, an AI researcher, Tesla's former director of AI and a member of OpenAI's founding team, coined the term in early 2025, describing it as a workflow where you "fully give in to the vibes" and stop worrying about the code itself. The phrase caught on so quickly across developer circles that Collins Dictionary named vibe coding its Word of the Year.

Y Combinator data from the Winter 2025 batch also shows how quickly things are shifting: about 25% of the startups had codebases built almost entirely by AI.

How vibe coding works

Vibe coding turns software development into a conversation. Instead of typing functions, arranging files or building components, you describe your idea to an AI tool using plain language. You might say, "I want to create a skincare blog with a homepage, an articles page and a basic editor so I can add new posts." The AI generates the framework, logic and user interface. You open the project, test it, see what works and refine your prompt to make adjustments. You repeat the cycle until you're happy with the results. In this intention-driven development, you focus on the idea and the AI model handles most of the implementation.

Where beginners get stuck

That doesn't mean anyone instantly knows what to do with raw code. Even though vibe coding removes the need to understand syntax, it doesn't remove the need for basic computer literacy. A beginner won't automatically know where code goes or what to do with it. Vibe coding replaces the technical knowledge of how to write software, but not the procedural know-how of how to operate the tools that run it. These platforms simplify the process, but newcomers still need step-by-step guidance for basic actions, like creating a project, opening the correct file, pasting in the code and previewing the result.

Sam Dhar, former engineering leader at Adobe and Amazon Alexa and now leading AI Platform at Galileo AI, tells CNET that someone still has to evaluate the output, carefully understand what was produced, make decisions about it and then rework it as needed. "Only someone who has that knowledge and experience can truly effectively use AI to be able to build things that are production-ready," Dhar says. Dhar describes real software as a pyramid of decisions, from tiny UI choices like a button's color and shape to high-level questions like who the app is for and how many users it should handle.
In his view, you still need teams beneath a lead architect, because not every decision can be spelled out in one giant prompt to a model.

Tools that support vibe coding

ChatGPT, Claude, Gemini, Grok, Cursor and GitHub Copilot Workspace are some of the many tools you can use to generate code, each with limited free use. With those tools, you can also generate entire apps, fix bugs, extend features and rewrite codebases using natural language. If you ask one of those AI tools to create code for you, you will still need to know what to do with that code: how to copy and paste it into a text editor, save it into a file (.html or .py) and run it on your own computer. This can be a hurdle if you have zero programming knowledge.

Platforms like Bolt and Replit simplify these steps because you no longer paste code anywhere. The AI chat interface generates the entire project inside the editor, sets up the structure and allows you to request changes in plain language. You can publish a working site using the platform's free URL without paying for a custom domain or hosting, all without seeing or touching raw code. Both platforms offer limited free plans. However, the cost of that convenience is lower visibility into how the system actually works. And if you're a perfectionist like I am, you might end up investing hours tweaking your prompts and correcting the code to get it to do what you want -- or not, because in my case, I ran out of free tokens.

Most platforms provide a free public URL, so you don't need a paid domain or hosting unless you want a custom domain; you can also add a domain you already own. From there, if you want it to feel like a real mobile app, the easiest way for both iOS and Android is to turn it into a progressive web app by opening it in your phone's browser and tapping Add to Home Screen. It takes 10 seconds, costs nothing and needs no approval. Getting it into the actual app stores is different. iOS is hard for beginners because you need a Mac, Apple's Xcode software, an Apple Developer account ($99/year) and manual building and testing. Android is simpler, with a one-time $25 Google fee and no Mac required; you can build and upload directly from Replit or Bolt via Expo in a few clicks and publish your app in hours.

The difference between vibe coding, no-code and traditional programming

In traditional programming, you have to understand everything you write. You write every line in languages like JavaScript, Python or C++, build the logic yourself and control the structure of the entire system. You also carry the responsibility for debugging, performance and security. No-code tools like Webflow and Notion let you assemble software through visual interfaces instead of code. They're useful for websites, small customer relationship management systems and internal dashboards, but they limit you to whatever structures the platform supports. Technically you're building software, but only inside predefined templates. With vibe coding, you focus on the outcome, not the implementation. Instead of typing code or dragging components around, you describe what you want in plain language and the AI generates the framework, interface and behavior. As an example, I started building a website with just a few prompts using Replit.

What you can build with vibe coding

Developers use vibe coding to generate prototypes and replace repetitive work.
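As a concrete but hypothetical illustration of the "save it into a file and run it" step described above, and of the small personal tools mentioned next, here is the kind of minimal single-file script an AI assistant might hand back for a to-do list request. Nothing below comes from any specific tool; the file name, commands and storage format are assumptions made for the sketch.

    # todo.py -- illustrative only; not the output of any specific AI tool.
    # Save this text in a file named todo.py and run it with: python todo.py
    import json
    from pathlib import Path

    TODO_FILE = Path("todo.json")  # tasks are stored in this file, next to the script


    def load_tasks():
        """Read previously saved tasks, or start with an empty list on first run."""
        if TODO_FILE.exists():
            return json.loads(TODO_FILE.read_text())
        return []


    def save_tasks(tasks):
        """Write the current task list back to disk."""
        TODO_FILE.write_text(json.dumps(tasks, indent=2))


    def main():
        tasks = load_tasks()
        print("Commands: add <task>, list, quit")
        while True:
            command = input("> ").strip()
            if command == "quit":
                break
            if command == "list":
                for number, task in enumerate(tasks, start=1):
                    print(f"{number}. {task}")
            elif command.startswith("add "):
                tasks.append(command[4:])
                save_tasks(tasks)
                print("Saved.")
            else:
                print("Unknown command.")


    if __name__ == "__main__":
        main()

You would save this as todo.py, start it from a terminal with python todo.py, and find your tasks stored in a todo.json file next to the script. Getting that far is exactly the procedural know-how the article says beginners still need.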
Beginners use it to build things they would never attempt with traditional programming, like a recipe organizer, to-do list, microblog, budgeting tool or basic notes app. Dhar said the real constraint isn't what AI can generate but rather what humans can realistically review. He advises keeping vibe-coded projects "small and controlled" so someone with enough experience can inspect every decision before it ships.

Some people try simple games, browser extensions and quick utilities for cleaning up files, but a few of these come with caveats. A browser extension, for example, still has to be loaded through the browser's settings, so someone with no technical background may need guidance even if the AI tool generates all the code. Once you get the basics down, AI can solve some of these problems for you. But not every hoped-for benefit comes to fruition, as my attempt to create an X post refiner showed: it took several hours of back-and-forth prompting only to get it working in Gemini Canvas but not as a standalone HTML file.

The limits and risks

Vibe coding works best for prototypes, throwaway projects, personal tools and experiments. Because beginners often don't understand the generated logic, errors and security issues can be hard to spot. Some projects become difficult to maintain because the AI mixes patterns or creates code that is technically correct but hard to read. Vibe-coding tools rely on LLMs, so they can also hallucinate code the same way chatbots hallucinate answers. That's manageable in a small side project but far more serious in apps that handle user data, require strict security controls or support many users at once. We are not yet ready to vibe-code our way into production-grade systems. Anything that needs long-term stability or strong security still requires real engineering, not vibe coding. A vibe-coded app may look polished on the surface, but hidden bugs often show up only after you use it more.

Why vibe coding took off

People who couldn't code before can now build simple apps. Developers who normally spend hours writing every line of code can now save time by just describing what they need. Low-code showed what building software with less code could look like; then AI said, "Hold my beer." If you can articulate an idea, you can build the first version of it. If you can't, AI will even help you create a vibe prompt to generate the code. It becomes the builder, bridging the gap between intention and implementation.

Programming has long been considered an elite skill, and AI is reshaping it just as it's reshaping many other jobs. But skilled developers don't have to worry about being replaced just yet, because they're the ones who can identify and correct issues when the AI gets things wrong. "Maybe we might not need as many programmers to do the same amount of work as we used to, but that still requires a lot of skill and a lot of experience to be able to evaluate whatever you're producing," Dhar says. "AI is... never going to be able to replace humans because there has to be accountability." Still, it is now far easier for anyone to take a swing at building something, even without a technical background, and that alone is a big change.
[2]
Vibe coding will bring a wonderful proliferation of software
They're now good enough to do things well, if you take the time to learn how to steer them.

Opinion For most of the last year, the phrase 'vibe coding' seemed more punchline than possibility. That outlook changed significantly over the last month: step-changes in quality mean vibe coding tools now generate code that's good enough to rewrite expectations about how IT will operate before the end of this decade.

Until early November, none of my prior attempts at vibe coding had yielded results worth pursuing. Then I put Anthropic's Claude Code to work (courtesy of $300 in credit from Anthropic) transforming a prototyped Python data analysis project into a full-featured, professional-class tool. It took a few hours (and a fair bit of care and tending), leaving me with the impression that vibe coding had turned a corner into usefulness.

When Google dropped its latest Gemini 3 Pro model at the end of November, they left another gift under the tree: Antigravity, an integrated IDE and agentic coding tool that draws upon Google's recent acquihires from AI coding startup Windsurf. I fired up Antigravity with a thought about a bit of code I'd love to have - if I could have it with minimal effort. The answer came almost immediately: a VRML 1.0 browser.

VRML 1.0 came out in mid-1994, only to be quickly supplanted by the far superior and still reasonably well-supported standards VRML 2.0, VRML97, and X3D. That left a fair bit of early content stranded. VRML 1.0 translation tools have been lost in the mists of time, and there's no way to load a VRML 1.0 world and view it.

So I fed Antigravity a copy of the VRML 1.0 spec, with the request that it create a VRML 1.0 browser for macOS, written in the Swift programming language, and using Apple's Metal 3D renderer. I didn't give it much more than that - but I did ask if it had any questions. It had a few, and then set to work.

Over the course of a Sunday afternoon, and Monday and Tuesday evenings, Antigravity churned out code, which I then plugged into Xcode, compiled, and ran. While Antigravity worked, I functioned as its quality assurance assistant: Google's tool can click its way around Chrome if you're doing pure web work, but it can't (yet) operate the macOS GUI. When compiles failed, I pasted those errors into Antigravity, which corrected its mistakes. When it built, I launched it, took screenshots, and fed those back in, with comments about the accuracy of the app's rendering.

By the end of Sunday night it was all working - sort of. Antigravity seemed quite pleased with itself, having plowed through the project checklist it created at the outset and cleverly uses to keep itself on track when working on a large project. Yet you could have driven a planet between that working version and a fully compliant VRML 1.0 browser. Antigravity had done just enough work to render very simple VRML 1.0 files but had left vast portions of the specification unimplemented. When I pointed that out, it drafted a new plan, and we spent Monday evening methodically implementing the features it didn't code the first time. On Tuesday evening, I asked it to generate a conformance test for the app - which the app promptly failed. I fed that output back into Antigravity as a bug list.

That's when I had my penny-drop moment. Antigravity (and Claude Code) are quite powerful, but naïve. They don't know what they don't know. Unless a human gets in the agent harness with them, keeping them focused and aware, they tend to just sputter out.
It felt as though I was driving one of those massive dump trucks they use in Australia's mines: incredible power that can easily tumble into a mineshaft - unless the human at the wheel remains continuously vigilant. Skill at "steering" coding assistants may soon be the quality most sought after in software engineers, systems administrators and the like. It's a balance between a light touch and a firm focus.

After roughly eight hours of "steering" Antigravity, I had a VRML 1.0 browser for macOS - you can build it yourself from source. It might not be 100 percent compliant with the spec - that'll take a few more hours of testing. Yet it's already "good enough" to display all the examples from my 1995 book on VRML, and other bits of content I found online. To go from a spec and a goal to an app in eight hours feels like success. It would have taken me ten times longer to code this myself - and that's if I knew how to code in either Swift or Metal.

I wrote a few VRML 1.0 browsers thirty years ago, so I came to this with deep domain knowledge. If I'd asked Antigravity to create a fault-tolerant database that would preserve transaction history across power failures, I'd have made a muddle of things, because I don't know enough in that domain to steer the machinery. That means domain experts in software engineering and operations aren't going anywhere - but they will be getting a lot more productive.

Beyond these obvious wins, it's now possible to forecast a time - before the end of the decade - when these tools have been sufficiently "softened" to allow pretty much anyone within an organization to rapidly develop an app for a specific use case. That's an interesting bit of road directly in front of us: a Cambrian explosion of weird user-specific apps, designed by users for themselves. These tools make the best better, while giving everyone else the benefits of write-once, run-once disposable software. Vibe coding looks to be the gift that keeps on giving. Happy holidays! ®
[3]
The future of coding has a vibe problem: balancing creativity with control
Recently awarded the 2025 Word of the Year by Collins Dictionary, vibe-coding has caught on as the latest AI trend. Many organizations are jumping on the bandwagon and proposing their own vibe-led workplace processes, from vibe-working and vibe-marketing to vibe-automation, with varying degrees of success. "Vibe" is used increasingly to refer to workplace processes that are based on intuition and put creativity first. This is made possible for developers through the power of AI tools; with vibe-coding, developers are guided by ideas and impulses, and the support of language models means code emerges almost incidentally. It's less about structured programming, and technical complexity fades into the background.

However, what is lost when quality and security controls are no longer prioritized? And if vibe-coding becomes a mainstay in software development, the demands placed on developers will fundamentally change. With AI and developers collaborating side by side, the future of coding is set for a profound shift.

With AI taking on the technical burden, vibe-coding opens new software development opportunities for people without in-depth programming knowledge. Besides the benefits of speed and simplicity from AI that we are all familiar with by now, vibe-coding democratizes programming by reducing the barrier to entry. Minimal manual coding means less-technical users can simply prompt an AI assistant, while more experienced developers can spend their time fine-tuning and experimenting. Vibe-coding means the time-intensive task of writing lines of code is replaced with time to brainstorm, experiment and prioritize creativity. The developer therefore takes on a more creative role and has the opportunity to lead innovation for the company. What's more, ideas can be implemented more quickly with vibe-coding, so the organization benefits from an accelerated development process.

However, while vibe-coding was designed to deliver code quickly, it was not designed with long-term security, maintenance and scalability in mind. In fact, studies show that vibe-coding can lead to increased complexity and re-work down the track or, in the worst-case scenario, low-quality "workslop". So time saved now creates inefficiencies later. This is especially true when developers implement the AI-generated code without review. Yet as AI greatly increases the volume of output, manually auditing that amount of code becomes daunting or perhaps even infeasible. Without human review, mistakes fall through the gaps, and mistakes within lines of code can be taken advantage of by hackers. For example, attackers can exploit a typo in a package name to inject manipulated libraries; if that goes undetected, the user unknowingly incorporates malicious code into company systems. Because dependencies are often managed automatically in a vibe-coding environment, the attack surface of an enterprise increases significantly.

A study by OutSystems found 62% of IT professionals using AI in the development process experience growing challenges with security and governance. Because AI-generated code can introduce errors and new security vulnerabilities, vibe-coding requires developers to be more stringent than ever before. These risks don't just threaten product quality; they're redefining what it means to be a developer. The more automated code generation becomes, the more quality assurance becomes a key responsibility of the developer.
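To make the package-typo risk above concrete, here is a minimal sketch of the kind of review check a developer might run over an AI-generated dependency list. The allow-list, the example dependencies and the similarity threshold are all made up for illustration; this is not a substitute for a real software supply-chain tool.

    # check_deps.py -- a hedged sketch, not a real supply-chain security tool.
    # It flags dependency names that look like near-misses of packages a team
    # already trusts -- the "typo in a package name" risk described above.
    from difflib import get_close_matches

    # Hypothetical allow-list of packages the team has already reviewed.
    KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask", "sqlalchemy"}

    # Hypothetical dependency list pulled out of an AI-generated project.
    generated_dependencies = ["numpy", "reqeusts", "flask"]

    for name in generated_dependencies:
        if name in KNOWN_PACKAGES:
            continue  # exact match against the reviewed list, nothing to flag
        lookalike = get_close_matches(name, KNOWN_PACKAGES, n=1, cutoff=0.8)
        if lookalike:
            print(f"WARNING: '{name}' is not on the allow-list but closely "
                  f"resembles '{lookalike[0]}' -- review before installing.")
        else:
            print(f"NOTE: '{name}' is unknown; treat it like any new dependency.")

Run against the example list, the script lets numpy and flask through and flags "reqeusts" as a near-miss of "requests" -- exactly the kind of name a hurried review would gloss over.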
Roles within software development shift as operational implementation is left to AI and developers spend more time on quality assurance and overarching structure. This requires a nuanced understanding of how to work with and prompt generative AI, as well as an awareness of its limitations. If vibe-coding is the chosen approach, developers must not rely blindly on AI but rather maintain their human authority to strategically leverage its strengths and compensate for its weaknesses.

The skills required of a software developer will also change due to vibe-coding. Take AI prompting as an example: it becomes imperative that the developer knows how to shape their input in a way that yields stable and reproducible results (a short sketch at the end of this piece illustrates the idea). It's also important to consider the ethical responsibility that falls on the developer when AI acts on their behalf. Who deals with the consequences if AI-generated code violates license terms, or infringes intellectual property because the model was trained on publicly accessible source code? What's more, boundaries between software development roles are blurring, with developers increasingly taking on tasks that previously fell under product ownership. The developer is shifting from execution to oversight, becoming an AI strategist who ensures the AI-generated code remains as ethical and secure as if it had been designed by humans.

Vibe-coding undoubtedly allows for creative freedom and faster output; however, this value is only felt long term if supported by clear quality standards, technical expertise and an awareness of the limitations of generative AI systems. Low-code platforms are the perfect complement, as they create structured environments for the secure integration of vibe-coding. Low-code platforms guarantee that governance, scalability and long-term maintenance are prioritized by design; vibe-coding can then generate creative freedom within those parameters. Companies pursuing this hybrid approach can leverage the full potential of vibe-coding without losing control: the clearly defined processes of an AI-powered low-code platform provide the guardrails needed for long-term reliability.

Vibe-coding in its current iteration is not the future of software development. The next era of development blends AI-driven agility with the structural foundations enterprises need. Developers evolve from routine coders to AI facilitators, and humans and machines work side by side to innovate at scale with confidence.
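The "stable and reproducible results" point above is easiest to see in code. Below is a minimal sketch assuming the OpenAI Python client; the model name, prompt and parameter choices are illustrative, and other providers expose similar but not identical knobs. Pinning an explicit model, a low temperature and (where supported) a seed makes repeated runs of the same prompt far more consistent, which in turn makes the output practical to review.

    # prompt_sketch.py -- a minimal sketch, assuming the OpenAI Python client.
    # Parameter names and reproducibility guarantees differ between providers
    # and versions; treat every value below as an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # expects an OPENAI_API_KEY environment variable

    SYSTEM_PROMPT = (
        "You generate a single Python file. Use only the standard library, "
        "include docstrings, and return only code with no commentary."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",   # pin an explicit model instead of a default
        temperature=0,          # reduce run-to-run variation in the output
        seed=7,                 # best-effort repeatability where supported
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "A command-line tool that renames photos by the date they were taken."},
        ],
    )

    print(response.choices[0].message.content)

The broader idea is to treat the prompt, model version and generation settings as part of the project, kept alongside the code they produce, so the output can be regenerated and reviewed rather than trusted once and forgotten.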
Collins Dictionary crowned vibe coding its 2025 Word of the Year, marking a shift in software development where AI tools generate code from simple text prompts. While the approach democratizes programming and accelerates development, it raises critical questions about code quality, security vulnerabilities, and the evolving role of developers who must now balance creative freedom with rigorous oversight.
Vibe coding has rapidly evolved from an experimental concept to a mainstream approach that's reshaping software development. The term, coined by Andrej Karpathy, an AI researcher and Tesla's former director of AI who was also part of OpenAI's founding team, describes a workflow where developers prompt an AI model with natural language and receive functional code in return [1]. The phrase spread so quickly across developer circles that Collins Dictionary named it Word of the Year for 2025 [1]. This recognition signals more than linguistic novelty: it marks a fundamental shift in how software gets created.

The impact is already visible in startup ecosystems. Y Combinator data from the Winter 2025 batch reveals that approximately 25% of startups had codebases built almost entirely by AI [1]. This represents a dramatic acceleration in AI adoption for core development work, moving beyond simple code suggestions to full application generation.

Vibe coding transforms software development into a conversational process. Instead of manually typing functions or arranging files, developers describe their vision using natural language prompts [1]. Someone might request a skincare blog with a homepage, articles page, and basic editor for adding posts. The AI coding assistants then generate the framework, logic, and user interface. This intention-driven development approach lets creators focus on ideas while AI handles implementation details.

Major AI tools supporting this workflow include ChatGPT, Claude, Gemini, Grok, Cursor, and GitHub Copilot Workspace [1]. Platforms like Bolt and Replit further simplify the process by generating entire projects inside integrated editors, allowing users to request changes in plain language without touching raw code [1].
Real-world applications demonstrate the power and limitations of these systems. One developer used Google's Gemini 3 Pro model with Antigravity, an integrated IDE and agentic coding tool, to build a VRML 1.0 browser for macOS in roughly eight hours [2]. The developer estimated it would have taken ten times longer to code manually, especially without prior knowledge of the Swift or Metal programming languages.

The democratization of programming stands as one of vibe coding's most significant contributions. With AI taking on the technical burden, people without in-depth programming knowledge can now create functional applications [3]. This accessibility reduces barriers to entry while allowing experienced developers to spend time fine-tuning and experimenting rather than writing repetitive code.

However, the democratization comes with caveats. Sam Dhar, former engineering leader at Adobe and Amazon Alexa who now leads AI Platform at Galileo AI, emphasizes that someone must evaluate AI-generated code and understand what was produced to make informed decisions [1]. According to Dhar, only those with knowledge and experience can effectively use AI to build production-ready applications. He describes real software as a pyramid of decisions, from tiny UI choices like button color to high-level questions about target users and scalability.

While vibe coding delivers speed and simplicity, it wasn't designed with long-term security, maintenance, and scalability in mind [3]. Studies indicate that creative software development through AI can lead to increased complexity and rework, or in worst cases, low-quality output. When developers implement AI-generated code without review, mistakes slip through, creating openings for attackers to exploit.

A study by OutSystems found that 62% of IT professionals using AI in the development process experience growing challenges with security and governance [3]. Attackers can exploit typos in package names to inject manipulated libraries, and if undetected, users unknowingly incorporate malicious code into company systems. Because dependencies are often managed automatically in vibe coding environments, the attack surface of enterprises increases significantly.
The more automated code generation becomes, the more quality assurance emerges as a key responsibility for developers [3]. Operational implementation shifts to AI while developers spend more time on quality assurance and overarching structure. This requires a nuanced understanding of how to work with and prompt generative AI, along with awareness of its limitations.

The experience of building complex applications reveals that AI coding assistants like Antigravity and Claude are quite powerful but naïve: they don't know what they don't know [2]. Unless a human gets in the harness with them, keeping them focused and aware, they tend to sputter out. Skill at "steering" coding assistants may soon be the quality most sought after in software engineers and systems administrators, requiring a balance between a light touch and firm focus.

Prompting becomes imperative, as developers must shape their input to yield stable and reproducible results [3]. Ethical responsibilities also fall on developers when AI acts on their behalf, including questions about who deals with the consequences if AI-generated code violates license terms or infringes intellectual property from publicly accessible source code used in training.

Domain expertise continues to play an essential role. One developer noted that deep domain knowledge from writing VRML browsers thirty years ago enabled effective steering of Antigravity [2]. Without that background, attempting to create a fault-tolerant database would have resulted in a muddle. This means domain experts in software engineering and operations aren't going anywhere, but they will become significantly more productive.

The developer is shifting from execution to oversight, becoming an AI strategist who ensures AI-generated code remains as ethical and secure as when designed by humans [3]. Boundaries between software development roles are blurring as developers increasingly take on tasks that previously fell under product ownership. The value of creativity and faster output through vibe coding is only felt long-term if supported by clear quality controls and rigorous human oversight to compensate for AI weaknesses while leveraging its strengths.

Summarized by Navi