Curated by THEOUTPOST
On Sat, 31 Aug, 8:02 AM UTC
2 Sources
[1]
Big Tech's Big Gamble with AI: Are They Overextending?
Large tech firms are spending billions on AI, but the promised returns have yet to materialize. Microsoft, Google, Amazon and other digital behemoths are investing billions in artificial intelligence with the goal of revolutionizing markets and generating new business opportunities. The real-world benefits, however, have fallen short of expectations. According to The Hill, recent statistics indicate that although $50 billion has been invested in AI processors alone, large AI models have generated only roughly $3 billion in revenue.

To create responses and complete tasks, these huge models -- like OpenAI's ChatGPT -- ingest enormous volumes of data. Despite their potential, they frequently fall short of delivering the practical benefits that businesses had hoped for. Smaller, more focused AI models, on the other hand, are performing better. These models use targeted data and task-specific training to produce faster, more accurate results. According to ProMarket.org, they are more affordable and deliver higher returns on investment.

Regulators are now closely monitoring Big Tech's partnerships with AI companies. Deals such as Microsoft's agreement with OpenAI are under investigation by the Federal Trade Commission (FTC). Writing for ProMarket.org, John B. Kirkwood raises concerns that these partnerships could lessen competition and impede innovation. The FTC and other regulatory bodies worry that such arrangements might lead to higher prices and reduced competition, and they are examining the deals carefully to ensure they don't harm the market.

As AI technology continues to evolve, the focus may shift from ambitious goals to more practical applications. The gap between the high expectations and the actual performance of large AI models is becoming clearer, and the industry may need to rethink its approach to better meet real-world needs and comply with new regulations.
[2]
Did Big Tech bite off more than it can chew with AI?
It can be tough to keep perspective, of course, when the world's five largest companies by market capitalization are each investing billions of dollars in their own AI solutions; when venture capitalists are telling us that AI will displace half the world's jobs by 2027; and when even elementary schools are struggling to write AI policies. But AI isn't coming for your job this year, and it's not going to replace humanity.

The truth is, the massive generative AI programs that train on huge swaths of the internet have a great future behind them. Although their possibilities are tantalizing, AI reality has fallen short. Large AI programs have yet to consistently provide practical value to the businesses spending billions of dollars on them, and the delta between business value and AI spending will only get worse.

Companies have seen much greater return on investment with smaller AI models designed for specific purposes. Such tailored models -- as opposed to unstructured large language models like OpenAI's ChatGPT -- provide immediate and significant return on investment and can thrive in a regulated environment.

By way of background, large language models scrape information from across the internet, ingesting terabytes of data to produce the outputs we're all familiar with, from AI-generated art and op-eds (not this one, I promise) to consumer sentiment analyses and SEO optimization. By ingesting such huge amounts of data, they can identify and repeat patterns. Smaller models, on the other hand, detect patterns from far less data, all of it drawn from areas relevant to a given field. A model designed to make stock trades might pull data from the Fed, business news sites and a handful of other relevant sources while ignoring countless irrelevant items online: it would not pull data from TMZ, the complete works of Shakespeare or a TikTok dance craze.

Such small models appeal to companies, first and foremost, because they provide immediate, tangible return on investment.
Large language models do not. According to Sequoia Capital, the AI industry spent $50 billion on chips from NVIDIA alone -- and that's not even counting AI's other astronomical expenses, such as top-flight talent, intensive programs to help companies leverage complex technologies and high financing costs. Even Mark Zuckerberg admits the industry is spending too much. Yet large language models have brought in only $3 billion in revenue, largely because they're not good at math; much to the chagrin of humanities majors, math is still very important in business. Sadly, with these technologies, 2 plus 2 doesn't always equal 4.

Compare that to the immediate ROI of smaller models. Companies can implement such systems for six figures instead of eight and begin using them almost as soon as the check clears. Some companies want to change the world and harness the next big thing, of course. But ultimately, these are profit-driven businesses. When they find themselves spending seven and eight figures on "pilot" programs that fail to improve the bottom line, they're going to make changes.

It's not just money that could slam the brakes on large language models; it's the coming wave of regulation. For good or ill, AI is going to face more robust regulation. Given how fast the technology is moving and the relatively limited understanding of AI on Capitol Hill, that regulation is going to be blunt. It will center on simple concepts that members of Congress can understand and, given the end of Chevron deference, rules that regulatory agencies can enforce. Most likely, regulation will require that AI products be both repeatable (if you enter the same prompt, you get the same result each time) and explainable (we can determine how the program reached its conclusion when we need to). Quite fairly, regulators will be uncomfortable with AI's tendency to "hallucinate" and with programs that cannot explain how or why they reached a given decision.
The vast majority of large language models are neither repeatable nor explainable, and they are unlikely to become so in the near future. They simply ingest too much data and use too much processing power for us to know where their information comes from or how they reach their conclusions. Smaller models do not share those limitations: understanding their conclusions is a simple forensic exercise, and because they comb through far smaller data sets, their conclusions will not vary.

This isn't to say large language models are useless or that the AI revolution is overblown. They aren't, and it isn't. There are use cases for large language models and, as everyone from the finance industry to the agriculture industry can attest, AI is already valuable. But the AI industry has to grow up. It can no longer focus on "limitless possibilities" and ignore the associated costs. Like a daydreaming college student who has just graduated, it's time for AI to stop thinking about solving all the world's problems and start thinking about getting a job. AI tailored to specific needs is AI's foreseeable future. And that is a good thing. Let's take a deep breath and enjoy it.
As major tech companies invest heavily in AI, questions arise about sustainability and the potential of smaller, specialized models. This story explores the current AI landscape, its challenges, and emerging alternatives.
In recent months, tech giants have been pouring unprecedented amounts of resources into artificial intelligence (AI) development. Companies like Microsoft, Google, and Meta are betting big on AI, with investments reaching into the billions of dollars. Microsoft, for instance, has committed a staggering $13 billion to OpenAI, the creator of ChatGPT [1].
This AI arms race has led to rapid advancements in large language models (LLMs) and generative AI capabilities. However, the sustainability of this approach is increasingly being questioned by industry experts and analysts.
The massive investments in AI have raised eyebrows among investors and industry watchers. There are growing concerns about the return on investment (ROI) and the long-term viability of these expensive AI projects. As noted by Dan Ives, managing director at Wedbush Securities, "The biggest risk is that this becomes a money pit that has little to no ROI over the coming years" [1].
Moreover, the environmental impact of training and running these large AI models is significant. The energy consumption and carbon footprint associated with AI development have become points of contention in the tech industry's push for innovation.
As the limitations and challenges of large-scale AI models become more apparent, attention is shifting towards smaller, more specialized AI solutions. These tailored models offer several advantages over their larger counterparts:
Efficiency: Smaller models require less computational power and energy to run, making them more cost-effective and environmentally friendly.
Specialization: These models can be designed for specific tasks or industries, potentially offering better performance in niche applications.
Privacy and Security: Smaller models can be run on-device or on local servers, reducing data privacy concerns associated with cloud-based large language models [2].
The potential for specialized AI models extends across various sectors. In healthcare, for example, AI models tailored to specific medical specialties could assist in diagnosis and treatment planning without the need for extensive patient data to be shared with large tech companies.
Similarly, in the legal field, AI models designed to understand complex legal language and precedents could revolutionize legal research and contract analysis while maintaining client confidentiality [2].
As the AI landscape continues to evolve, it's becoming clear that a one-size-fits-all approach may not be the most effective strategy. While large language models have demonstrated impressive capabilities, the future of AI likely lies in a combination of general-purpose and specialized models.
Tech companies and investors are now faced with the challenge of finding the right balance between pushing the boundaries of AI technology and developing practical, sustainable solutions that can deliver tangible benefits to businesses and society at large.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved