2 Sources
[1]
'Let a thousand flowers bloom': Jensen Huang says demanding ROI from AI is like forcing a child to make a business plan for a hobby | Fortune
There's a number haunting the artificial intelligence (AI) space: 95%. As in, 95% of generative AI pilots are failing, according to MIT's influential, arguably overblown research study from August 2025. When Fortune's Diane Brady spoke with PwC Global Chairman Mohamed Kande roughly six months later in Davos, Switzerland, the figure was still stubbornly high: 56% of CEOs surveyed were getting "nothing" from their AI adoption efforts.

The solution is peace, love, understanding, and good parenting skills, according to Nvidia CEO Jensen Huang. The $4 trillion market-cap man arrived at the Cisco AI Summit with a message that sounded less like Wall Street rigor and more like a blend of 1960s counterculture and modern parenting: "Let a thousand flowers bloom."

Sitting down with Cisco CEO Chuck Robbins, Huang addressed the tension facing enterprise leaders who feel pressure to adopt AI but fear the lack of immediate, quantifiable results. When Robbins asked what first steps an enterprise should take, Huang dismissed the fixation on spreadsheets. "I get questions like ... ROI," Huang said. "I wouldn't go there."

Instead, he advocated a philosophy of abundance and messy experimentation, explicitly comparing corporate innovation to raising children. He argued that demanding proof of financial success before allowing an engineer to try a new AI tool is as stifling as asking a child to justify a hobby with a business plan. "I want the same thing for my company that I want for my kids: go explore life," Huang explained. When your kids tell you they want to try something, he added, you should say yes. We never ask questions at home like: What is the return on investment here? How is this going to lead to financial success? How can you prove to me that it's worthwhile? "We never do that at home. But we do it at work."
This approach requires executives to relinquish a degree of command that might feel uncomfortable, Huang admitted, but he argued that the resulting creativity and innovation make it worthwhile. "The number of different AI projects in our company is, it's out of control and it's great," he said, remarking that innovation doesn't always happen when you're in control. "If you want to be in control, first of all, you've got to seek therapy. But second, it's an illusion. You're not in control. If you want your company to succeed, you can't control it." Huang argued that to succeed, leaders must seek to influence their companies rather than control them.

The logic behind letting "a thousand flowers bloom" is risk management through diversification. While this method "makes for a messy garden," he said, it prevents the error of committing resources -- "putting all your wood behind one arrow" -- too early in a technological shift where the "winning" tools are not yet obvious.

While advocating a relaxed approach to ROI, Huang was adamant about the necessity of "tactile understanding." He urged leaders not to rely solely on cloud rentals or finished products. Computers are everywhere these days, he said, but you would still get a better understanding if you built one yourself, just as a serious car owner wouldn't take Uber everywhere but would look closely at their own engine. "Lift the hood, change the oil, understand all the components," he said. "Build something. You might discover you're actually insanely good at it. You might discover that you need that skill."

He stressed that because AI technology is vital to the future, companies must build some infrastructure on-premises to truly understand how the "components" work. This relates to data privacy and what Huang calls the most valuable intellectual property: the questions. "The most valuable IP to me is not my answers... they're my questions," Huang said, remarking that answers are a commodity but smart questions are irreplaceable.
Fortune recently visited KPMG's Lakehouse in Orlando, Florida, where the firm was rolling out its AI training framework, first with interns and then firmwide. "Think, prompt, check" was how it trained employees to work with AI, stressing the first and last steps as things not to take for granted.

The urgency for this experimentation, according to Huang, stems from a fundamental "reinvention of computing." The industry is moving from "explicit programming" -- writing specific lines of code -- to "implicit programming," where users state their intent and the AI figures out the solution. In this new world, "typing is a commodity," Huang noted. The true value lies in the domain expertise required to guide the AI. "You now tell the computer what your intent is, and it goes off and figures out how to solve your problem."

Huang closed by flipping the popular ethical narrative of "humans in the loop" on its head. The goal, he stated, should be "AI in the loop." By integrating AI into every process, companies can capture the "life experience" of their employees, turning daily work into permanent corporate intellectual property. In other words, they'll be letting a thousand flowers bloom, but only if they have the right curiosity, the right questions, and the right support from above to think freely.
[2]
For God's sake, do it! The AI worldview according to CEO Jensen Huang - out of control innovation, the non-death of software, and leaving Moore's Law behind us in a world of abundance
Recent days have seen a lot of speculation around the relationship between OpenAI and NVIDIA, a relationship that many commentators have long regarded as dangerously co-dependent, at least on the part of the former. To recap: back in September last year it was revealed that NVIDIA was set to invest up to $100 billion in OpenAI, a deal that has yet to fully materialise. Earlier this week the markets were spooked by NVIDIA CEO Jensen Huang's comments to the effect that there wasn't a commitment; rather, his firm had been invited to invest that sum in OpenAI. But he also sought to calm nerves by saying that NVIDIA was going to make "a huge investment" in OpenAI, which he framed as "one of the most consequential companies of our time". That did settle things down somewhat, although an agitated OpenAI CEO Sam Altman himself still took to X to 'shout' at investors, perhaps unhelpfully:

I don't get where all this insanity is coming from.

So when both men turned up at the same event on Tuesday, that might, you would think, have been a great opportunity to ask the obvious question - WTF is going on here? The event was Cisco's AI Summit. Altman and Huang appeared at opposite ends of the day in separate sessions. Neither was asked that question, and the subject of the headline-dominating market jitters was never so much as alluded to. File that one under wasted opportunity?

What Huang did do, as part of an interesting banter session with Cisco CEO Chuck Robbins, was paint his vision of the transformative power of AI on all organizations and outline what the company of the future - and its workers - might look like. He began with a startling assessment of the number of AI projects underway within NVIDIA itself, declaring:

It's out of control - and it's great!

A simple case of you can never have too much? Not quite. Huang's thesis runs as follows:

Innovation is not always [being] in control. If you want to be in control, first of all, you ought to seek therapy.
But second, it's an illusion. You're not in control. If you want your company to succeed, you can't control it. You want to influence it, [but] you can't control it.

Cynics might not find that particularly reassuring, given the worries about the AI bubble bursting and taking the rest of the global economy with it, but Huang has more to add:

Too many people, too many companies, want explicit, specific, demonstrable ROI. Showing the value of something worth doing in the beginning is hard... Let a thousand flowers bloom, let people experiment safely. We're experimenting with all kinds of stuff in the company. We use Anthropic. We use Codex. We use Gemini. We use everything. And when one of our groups says, 'I'm interested in using this AI', my first answer is, 'Yes', and then I'll ask, 'Why?'. Instead of 'Why, then yes', I say, 'Yes, then why?'. The reason for that is because I want the same thing for my company that I want for my kids - go explore life! [The kids] say they want to try something, the answer is, 'Yes', and then I say, 'How come?'. You don't go, 'Prove it to me. Prove to me that doing this very thing is going to lead to financial success or some happiness someday. Prove to me, and until you prove it to me, I'm not going to let you do it'. We never do that at home, but we do do it at work.

Of course, the danger with letting a thousand flowers bloom is that you end up with too many plants and a lot of weeds, surely? Huang concurs:

At some point, you have to use your own judgment to figure out when to start curating the garden, because a thousand flowers in bloom makes for a messy garden. At some point, you have to start curating to find what's the best approach or what's the best platform, so that you can put all your wood behind one arrow. But you don't want to put all your wood behind one arrow too soon.

The basic message is that the old rules are just that - old and out of date.
You need to think today in terms of what Huang calls "the abundance of intelligence" and how things have changed by orders of magnitude:

That's another way of saying what used to take a year could take a day now. What used to take a year could take an hour. It could be done in real time. And the reason for that is because we are in the world of abundance. Moore's Law, goodness gracious, that was slow. That's like snails [pace]. Remember, Moore's Law was two times every 18 months, 10 times every five years, 100 times every 10. But where are we now? One million times every 10 years.

When I think about engineering, when I think about the problem these days, I just assume my technology, my tool, my instrument, my spaceship, is infinitely fast. How long is it going to take for me to go to New York? [Let's say] I'll be there in a second. What would I do different if I can get to New York in a second? What would I do different if something that used to take a year now takes real time? What would I do different if something that used to weigh a lot is now just anti-gravity? When you approach everything with that attitude, you are applying AI sensibility.

And don't stop thinking big, he urges, take everything to extremes:

The definition of abundance is you look at a problem so big, and you say, 'You know what, I'll do it all. I'm going to cure every field of disease. I'm not going to just do cancer. Are you kidding me? That's insane. We'll just do all of human suffering!'. That's abundance.

If you're not applying that sensibility, you're doing it wrong, insists Huang:

Imagine you apply that logic, that sensibility, to the hardest problems in your company. That's how you're going to move the needle - and that's how they all think now. If you're not thinking that way, just imagine your competitor is thinking that way. If you're not thinking that way, just imagine a company that is about to get founded is thinking that way. It changes everything.
That includes traditional notions of the role and function of software, although it was good to hear that Huang is no fan of the Altman-backed theory that AI spells the death of SaaS companies as we know them:

There's this notion that the software industry is in decline, and will be replaced by AI. You can tell because there's a whole bunch of software companies whose stock prices are under a lot of pressure because somehow AI is going to replace them. It is the most illogical thing in the world, and time will prove itself... If you were an Artificial General Intelligence, would you use tools like ServiceNow and SAP and Cadence and Synopsys? Or would you re-invent a calculator? Of course, you would just use a calculator.

But software is different now, he added:

It's contextual, and every context is different. Every time everybody uses the software is different, and every prompt is different. The precursor you give it, the priors you give it, the context is different... In the future, everything is going to be generative. It's happening right now. This conversation has never happened before. The concepts existed before, the priors existed before, but every single word in the sequence has never happened before. And the reason for that is, obviously, we're four wines in...

Hmmmmm... that's the kind of joke that could send Wall Street into an unhelpful tizzy these days, but plowing on...

Huang also makes the bold claim that AI can transform any company into a tech company, and he is taking no prisoners in the examples he uses to back this up:

I love Disney, and I love working with Disney; I'm pretty sure they'd rather be Netflix. I love Mercedes, [but] I am certain they'd rather be Tesla. I love Walmart; I am certain they'd rather be Amazon. I believe that we have an opportunity to help transform every single company into a technology company. Technology first, technology first.
Technology is your superpower and the domain is your application, versus the other way, where the domain is who you are and you're seeking out technology.

But what about the role of human beings in this brave new world Huang envisages? Here he has no time for naysayers and doom-mongers:

The most important part of AI is applications... all the layer underneath is just infrastructure stuff. What you need to do is apply the technology. For God's sake, apply the technology. A company that uses AI will not be in peril. You're not going to lose your job to AI; you're going to lose your job to someone who uses AI, so get to it. That's the most important thing.

Smart people do smart things, he argues, and in the future domain expertise and knowledge will be humans' main asset:

For the first time, [humans] can explain exactly what you want to a computer in your language. Just tell it what you want. Tell it what you mean, and the computer will write the code, because coding, as it turns out, is just typing, and typing, as it turns out, is a commodity. That's the great opportunity for [everyone]. [Everyone] could be levitated above the atomic limitations that you were limited by before. [Everyone] could escape from this limitation, which is we don't have enough software engineers, because, as it turns out, typing is a commodity. And [everyone has] something of great value, which is domain expertise to understand the customer, understand the problem. That is the ultimate value, to understand the intent.

But for all that, you should try to understand technology, he urges:

I would advise you to do exactly the same thing I'd advise my children - build a computer. Even though the PC is everywhere, even though it's mature, even though the technology is developed, for God's sake, build one. Know why all the components exist. If you were to be in the world of the automobile industry [or the] transportation industry, don't just use Uber.
For God's sake, lift the hood, change the oil, understand all the components. For God's sake, understand how it works. It is vital. This technology is so important to the future. You must have some tactile understanding of it.

I was with Huang through most of his pitch until he declared:

There is an idea that AI should always have a human in the loop. It's exactly the wrong idea.

But he backed this up by going on:

Every company should have AI in the loop. The reason for that is because we want our company to be better and more valuable and more knowledgeable every single day. We never want to go backwards. We never want to go flat. We never want to start from the beginning. If we have AI in the loop, it will capture our life experience. Every single employee in the future will have AI, lots of AIs, in the loop, and those AIs will become the company's intellectual property. That's the future company.

I can see what he's saying, although I'm hugely uncomfortable with his initial wording, which offers too much of a green light to those wrong-thinking chief execs who are still looking to AI right now as a cost-cutting tool, a short-termist way to slash back the human workforce.

Anyway, an interesting session that met the brief Robbins had set out at the start of the day: that this wasn't a Summit for lots of product pitches. That wasn't a promise met by absolutely all the speakers by any manner of means, but it's fair to say that Huang understood the task at hand and delivered some thought-provoking content to take away. If only they'd asked him that bloody OpenAI question!
Nvidia CEO Jensen Huang challenged conventional business wisdom at the Cisco AI Summit, urging leaders to abandon immediate ROI expectations for AI projects. Comparing corporate innovation to parenting, he advocated for messy experimentation over rigid control, arguing that demanding proof of financial success before trying new AI tools stifles creativity just like asking a child to justify a hobby with a business plan.
Nvidia CEO Jensen Huang delivered a provocative message to enterprise leaders struggling with artificial intelligence (AI) adoption: stop obsessing over immediate returns and embrace chaotic experimentation. Speaking at the Cisco AI Summit with Cisco CEO Chuck Robbins, Huang dismissed the conventional fixation on return on investment (ROI), declaring "I wouldn't go there" when asked about demonstrating early financial success from AI innovation [1].
This philosophy arrives as companies grapple with disappointing AI adoption results. MIT research from August 2025 found 95% of generative AI pilots were failing, while PwC Global Chairman Mohamed Kande reported that 56% of CEOs surveyed were getting "nothing" from their efforts [1]. Rather than demanding proof before investment, Huang advocates what he calls "let a thousand flowers bloom" -- an approach that prioritizes abundance and messy experimentation over spreadsheet rigor.

Huang drew an unexpected parallel between raising children and managing AI projects. "I want the same thing for my company that I want for my kids: go explore life," he explained, noting that parents never demand business plans from children exploring hobbies [1]. This philosophy extends throughout Nvidia, where the number of AI projects has become what Huang cheerfully describes as "out of control" [2].
"Innovation is not always [being] in control," Huang stated at the Cisco AI Summit. "If you want to be in control, first of all, you ought to seek therapy. But second, it's an illusion. You're not in control" [2]. He emphasized that leaders must seek to influence their companies rather than control them, allowing employees to experiment with various platforms including Anthropic, Codex, and Gemini. When teams request to try new AI tools, Huang says "Yes" first, then asks "Why?" -- reversing the traditional approval process.

While advocating for experimentation, Huang acknowledged the strategic timing required. "Let a thousand flowers bloom" serves as risk management through diversification, preventing companies from "putting all your wood behind one arrow" too early, when the winning tools remain unclear [1]. Though this creates "a messy garden," it allows organizations to discover what works before committing significant resources [2].

The abundance of intelligence fundamentally changes what's possible, according to Huang. He contrasted current AI capabilities with Moore's Law, which delivered 100 times improvement every decade. "Where are we now? One million times every 10 years," he noted, describing Moore's Law as "snails [pace]" compared to modern acceleration [2]. This shift requires leaders to reimagine constraints: "What would I do different if something that used to take a year now takes real time?"
Despite his relaxed stance on ROI, Huang stressed the necessity of hands-on learning. He urged leaders not to rely solely on cloud rentals, comparing it to understanding cars: "Lift the hood, change the oil, understand all the components" [1]. Companies must build some infrastructure on-premises to grasp how AI components work, particularly regarding data privacy and intellectual property.

Huang identified questions -- not answers -- as the most valuable IP. "The most valuable IP to me is not my answers... they're my questions," he explained, noting that answers become commodities while smart questions remain irreplaceable [1]. This emphasis on domain expertise reflects the reinvention of computing from explicit programming to what Huang calls "implicit programming," where users state intent and AI determines solutions. In this paradigm, "typing is a commodity," while the true value lies in the expertise required to guide AI systems [1].

Firms like KPMG are already implementing structured AI training frameworks, teaching employees to "think, prompt, check" when working with AI tools [1]. As companies navigate this transition, Huang's message suggests success depends less on controlling outcomes and more on creating environments where employees can safely explore AI's possibilities across countless projects.

Summarized by
Navi
25 Nov 2025 • Business and Economy
