Curated by THEOUTPOST
On Mon, 5 Aug, 4:02 PM UTC
2 Sources
[1]
Today's challenge: Working around AI's fuzzy returns and questionable accuracy
Unfortunately, there are no clear-cut 'before-and-after' pictures that graphically illustrate the impact or accuracy of AI. It has become difficult to set realistic expectations about artificial intelligence -- and this could ultimately confuse efforts to understand the actual value of AI efforts. As the use of technology increases, it means changes in the career landscape for technology professionals, favoring more creative thinkers. That's the word from Ajay Malik, former head of architecture and engineering of Google's Worldwide Corporate Network, and currently CEO of Secomind.ai, who sees a rocky road ahead in the AI space. Perhaps one of the most challenging aspects of AI at this point is setting realistic expectations, he said in a recent podcast hosted by Thomas Erl, president of Arcitura Education. Also: Photoshop vs. Midjourney vs. DALL-E 3: Only one AI image generator passed my 5 tests For starters, there isn't enough measurement or awareness of the potential gains AI is delivering, Malik said. Decision-makers "want to be sure that all the information that they will use internally, or for interacting with customers, is accurate," he said. "How will companies measure the accuracy of what AI is doing? So AI did something, how do you always know it's accurate? How can you trust it 100%?" This weighs on how well business goals can be achieved through AI, Erl said. "If organizations are not successful or if they stumble, or if they invest in AI systems that end up resulting in loss instead of growth, that may postpone or change the outcome of how AI might impact their workforce. They might think, 'this didn't work out, let's go back to human workers.'" But the opportunity is real and we should prepare ourselves for whatever the impact will be." Unfortunately, there are no clear-cut "before-and-after" pictures that graphically illustrate the impact or accuracy of AI, Malik said. 
To address this, Malik said, companies "need to design built-in verification, built-in explainability, and built-in checks and balances to see if the AI's answer is correct." This includes "an alternative path, mechanism, model that provides a technique so that they can verify the answer." The key is understanding what exactly the AI system is producing, Malik advised. "Don't use AI as a black box that you depend upon without even thinking. We are not there today." In addition, businesses cannot rely blindly on services such as ChatGPT, as responses need to be accurate and free of hallucinations. Instead, he advises, AI systems should have "checks and balances built in, verifying the answers, verifying the data, and offering explainability. There is a term for it called XAI, or explainable AI."
There are also profound implications for technology-oriented career growth, Malik continued. "There is a big resource shift coming," he said. "Those employees who use AI will become a lot more valuable than the employees who do not use AI." AI's impact will be felt in the types of jobs and roles that will flourish in the months and years to come. "Even in software, even in programming, even in testing, a lot of those jobs will get eliminated -- not today, but over time," Malik predicted. "This is work which the AI can do -- very junior-level work or very repetitive, redundant work." This will especially apply to coder-level jobs, versus higher-level software engineering jobs, he continued. "Coders are just coding based on some known facts, and programming uses more thinking. In my own company, we see 20 to 25 times higher productivity because of using AI for supporting coding, for supporting meetings, meeting minutes, and action items -- they can do a lot more with fewer people now."
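The "built-in verification" Malik describes can be sketched in a few lines. This is a hypothetical illustration, not his or any vendor's implementation: ask_model and verify_independently are stand-in names, and the lookup table plays the role of the "alternative path, mechanism, model" that checks the answer.

```python
# Hypothetical sketch of "checks and balances": an AI answer is only
# trusted if an independent verification path agrees with it.

def ask_model(question: str) -> str:
    # Stand-in for a call to a generative model.
    canned = {"capital of France": "Paris"}
    return canned.get(question, "unknown")

def verify_independently(question: str, answer: str) -> bool:
    # Stand-in for the alternative path: here a reference lookup plays
    # the role of a second source (retrieval, rules, or another model).
    reference = {"capital of France": "Paris"}
    return reference.get(question) == answer

def answer_with_checks(question: str) -> dict:
    answer = ask_model(question)
    verified = verify_independently(question, answer)
    # Unverified answers are flagged rather than silently returned.
    return {"answer": answer, "verified": verified}
```

In a real system the verifier might be a retrieval step, a rules engine, or a second model; the point is that the primary model's output is never consumed unchecked.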
At the same time, there will be a shift toward "the thinkers, the problem solvers, the people who are creative," Malik added. "AI will take care of the labor -- the repetitive or well-defined work. But the creative humans will use AI to produce, at high velocity and high quality, something really creative. That shift is coming."
[2]
Companies will need to close three gaps -- value, confidence and expertise -- if they want to make AI useful
Company leaders, at least on earnings calls, say they want to use generative AI to boost the bottom line, and Microsoft recently estimated that nearly 60% of Fortune 500 companies are using its Copilot AI assistant. But speakers at Fortune Brainstorm AI Singapore last week warned that companies need to overcome gaps in value, confidence, and expertise if they want to reap the benefits of employing generative AI.
"There's a lot of excitement about AI, but translating that into business outcomes is not easy," said Debanjan Saha, CEO of DataRobot, referring to what he called the "value gap." When it comes to the "confidence gap," businesses are "not confident enough to take those AI applications or models to production because they are not sure about the accuracy," he continued. Fortunately, Saha noted, companies don't need their workforce to have a deep level of expertise with AI and building models. Instead, "you really need people who can use these models to actually solve business problems." Another important element of the confidence gap? Figuring out how to "stay out of jail," Saha said.
But government scrutiny isn't stopping companies from trying to adopt AI. "Surprisingly, the industries which are more regulated are actually using [AI] a lot, contrary to what you might think," said Vivek Luthra, senior managing director for growth markets and ANZ data at Accenture. (Accenture is a founding partner of Brainstorm AI.)
Luthra said that companies can approach the gaps in AI from a workforce-transformation perspective. Companies need to think ahead, determine what work they will be doing in the future, and then cultivate the workers they'll need to achieve it. Training the right talent could yield huge value for a firm. Luthra cited the example of a food-and-beverage company, an Accenture client, that used AI to create a year's worth of marketing content in just eight days. "That is phenomenal in terms of productivity enhancement," he said.
But workers will need to be trained in more than just generative AI and large language models. "To be able to scale, you need to think about the competencies. Having a large language model is a small part of that," Luthra said. Both speakers warned that not every firm will be able to adopt AI at the same rate. Tech and asset-light companies can be nimbler, while asset-heavy firms like manufacturing may need more time to use the new technologies. Saha reminded attendees that the point of generative AI is to return value to businesses, and that the "honeymoon period" is not going to last long. "Showing near-term value and showing some return on investment, showing some early successes -- I think that's very, very important," Saha said.
As AI technology advances, businesses and users face challenges with accuracy and reliability. Experts suggest ways to address gaps in AI performance and human expertise to maximize AI's potential.
As artificial intelligence (AI) continues to evolve and integrate into various aspects of business and daily life, users are grappling with a significant challenge: the accuracy and reliability of AI-generated outputs. According to recent reports, AI systems often produce "fuzzy returns" and exhibit questionable accuracy, leading to concerns about their practical application in real-world scenarios [1].
One of the key issues identified is the "confidence gap" in AI utilization. Many users and organizations lack the assurance needed to fully trust and implement AI solutions. This hesitation stems from uncertainties about the technology's capabilities and limitations. Experts suggest that bridging this gap requires a concerted effort to educate users and demonstrate AI's potential through practical, real-world applications [2].
Another critical challenge is the "expertise gap" that exists between AI developers and end-users. While AI systems are becoming increasingly sophisticated, there's often a disconnect between those who create the technology and those who are expected to use it in their daily operations. This gap can lead to misunderstandings about AI's capabilities and limitations, potentially resulting in misuse or underutilization of AI tools [2].
To address these challenges, experts recommend several strategies:
Enhanced Training and Education: Providing comprehensive training programs to help users understand AI's capabilities and limitations.
Collaborative Development: Encouraging closer collaboration between AI developers and end-users to ensure that AI solutions are tailored to real-world needs.
Transparency in AI Systems: Implementing measures to make AI decision-making processes more transparent and explainable to build trust among users [1].
Continuous Evaluation: Regularly assessing and refining AI systems to improve accuracy and reliability over time.
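The "continuous evaluation" item above can be sketched as a periodic spot-check of the AI system against a small labelled sample. The function name, threshold, and flagging logic here are illustrative assumptions, not a standard API.

```python
# Hypothetical sketch: score an AI system against labelled examples
# and flag it for review when accuracy drops below a threshold.

def spot_check(predictions, labels, threshold=0.9):
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must align")
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    # A failed check would trigger retraining or human review.
    return {"accuracy": accuracy, "passed": accuracy >= threshold}
```

Run on a schedule, a check like this turns "how do you always know it's accurate?" into a measurable, trackable number.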
Despite advancements in AI technology, human oversight remains crucial. Experts emphasize the importance of maintaining a "human-in-the-loop" approach, where AI assists human decision-making rather than replacing it entirely. This approach can help mitigate risks associated with AI errors and ensure that critical decisions are not left solely to automated systems [1][2].
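A minimal sketch of the human-in-the-loop routing described above, assuming the AI system reports a confidence score; the function name and threshold are illustrative, not taken from any real product.

```python
# Hypothetical human-in-the-loop gate: low-confidence AI output is
# escalated to a human reviewer instead of being applied automatically.

def route_decision(output: str, confidence: float, threshold: float = 0.8) -> str:
    if confidence >= threshold:
        return f"auto-approved: {output}"
    # Below the threshold, a person makes the final call.
    return f"needs human review: {output}"
```

The design choice is that the system fails toward human judgment: uncertain cases cost reviewer time rather than risking an unchecked automated decision.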
As organizations work to address these challenges, the future of AI implementation looks promising. By closing the gaps in confidence, expertise, and accuracy, businesses can unlock the full potential of AI technology. This process will likely involve ongoing collaboration between technologists, business leaders, and policymakers to create a framework that maximizes AI's benefits while minimizing its risks [2].