Curated by THEOUTPOST
On Tue, 16 Jul, 12:02 AM UTC
4 Sources
[1]
From Pilot To Production in Generative AI: 3 Lessons Learned
We have come a long way with Generative AI. A year ago I hosted a Stanford Computer Science Professor Emeritus in a meeting with a think tank I run - the Executive Technology Board, with some 100-plus Fortune 1000 chief technology executives. That conversation is posted here: Yoav Shoham on LinkedIn. Reflecting back on those early days of Generative AI, what has been really interesting is to see the discussion evolve from "what could we do with Generative AI" to "what should we do with Generative AI". This is not just a play on words. After a year of experimentation, incubation, POCs, and pilots, the discussion has changed to: where are we going to reliably get return on capital, and what have we learned from the bumps in the road along the way. There are three areas we are the wiser for today.

1. Build the Right Data Foundation

Without a proper data foundation, we can't even get started with AI. Data is the food for AI, and this means having access to relevant, high-quality, clean and well-governed data is key. But data is the last unsolved enterprise problem. And data is no longer a technology play - the technology is in fact the easier piece. Many companies are well down their data platform journey, with key decisions on infrastructure and modern data stacks behind them, but data cleansing continues to be "a journey, not a destination". In reality, this is a two-part journey: first cleaning up current data sets, and then designing the new data build to be first-time right. Data engineering (and the work around data management, data governance, data dictionaries, data ontologies and data quality) can almost be likened to "data janitorial services". And ownership, accountability, data strategy, capital allocation, and organization design are super important to get right. Best-in-class organizations have driven data quality by running all operating reviews directly off the data in core applications, not off derivative management slides - and as a result they drive significant focus on fixing core data quality. How you organize for data also matters: in great examples of scaled CDO organizations, a data officer in each business unit reports up to the CDO and is accountable for that unit's data.

2. Reimagine Business Processes and Manage Change

True innovations come at the intersection of disciplines, and with Generative AI all the more so. We have learned that unlike automation - where the completed processes remain the same, albeit faster, cheaper and more scalable - AI fundamentally changes the process, and the work that remains for human colleagues is entirely different from what they did before. As a result, reimagining and redesigning the business process around the toolkits now available - as opposed to retrofitting Generative AI into existing processes - is a critical component of the pilot-to-production journey. So is thinking through the change management and team skilling needed to take on the new work. The other insight across so many Generative AI deployments across industries is that the power and returns of Generative AI projects do not come from technology alone - they come from business and IT teams working together. One critical success factor is picking the right business problems to address - effectively crowdsourcing inputs from those closest to the business, and ensuring teams start with the questions, not the answers.
The other is capital (and management mindshare) allocation - balancing large investments in new technologies while at the same time reducing costs and legacy debt. The widespread interest in Generative AI across corporate boards is often a great opportunity to drive more investment in transformational AI projects. Finally, Lean as a methodology serves as a good framework for balancing ideas and execution, with a clear focus on redesigning operating processes. Becoming more intentional with change management and employee reskilling, so we can drive better adoption, is critical to ensuring return on investments in Generative AI. Technology is no longer the long pole in the tent - talent, skills and culture are. The best talent for Generative AI is often dispersed and decentralized, sitting in the business. Prompt engineering skills are generally lacking at the levels needed, and are often found outside of computer science fields. As a result, preparing talent for Generative AI and the long arc of technology is a critical success factor for corporations.

3. The Right Tech Stack and Governance Framework

The evolving landscape of technology players - foundation models, tools and utilities, and application providers - requires enterprises to make more strategic and well-reasoned capital allocation decisions, because there is more than one way to skin this cat. At the highest level, being intentional about approach is important: whether to use publicly available tools like ChatGPT; build your own Generative AI stack within your firewalled cloud, with prompt engineering and fine-tuned models; or take advantage of embedded AI solutions, where existing application vendors pre-package Generative AI features within their apps. Often this comes down to timing, control, fit and line-of-work integration. In addition, when it comes to specific large language model choices for building out Generative AI applications, more and more enterprises are choosing to hedge their risks and run two or more LLMs in parallel to mitigate the rapid pace of technology change - what's best today may not be tomorrow. This in turn requires a front end built for a consistent, great user experience, with an extensible back-end framework whose API interfaces can pull from multiple models. There is also the real concern that, with the increasing concentration of power in the provider space - from LLMs to GPUs - vendor lock-in is increasing exposure to economic, concentrated security and associated financial risks. Organizations that think through evolving regulatory frameworks and lead a fair, equitable and inclusive implementation of AI will come out ahead. Technology organizations are seeing a significant uptick in regulations related to critical infrastructure, privacy, security, and data in the context of Generative AI - the financial services industry, at the forefront of the regulatory dynamic, provides a glimpse of what's to come more broadly. The challenge is to work through new laws and ethical-use policies affordably while maintaining speed and innovation. One thing is for sure: we are only getting started, and what we do going forward will dwarf what we have done so far with Generative AI.
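To make the multi-model hedge described above concrete, here is a minimal sketch of what such an extensible back end can look like: one completion interface for the front end, with swappable provider adapters behind it. The class and method names are hypothetical illustrations, not any vendor's actual API.

```python
# A minimal sketch of the "two or more parallel LLMs" pattern: a single
# completion interface on the front end, with swappable provider adapters
# behind it. Provider classes here are stubs, not real vendor SDKs.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common contract every back-end model adapter must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class PrimaryModel(LLMProvider):
    # In practice this would wrap a vendor SDK call; stubbed for illustration.
    def complete(self, prompt: str) -> str:
        return f"[primary] answer to: {prompt}"


class FallbackModel(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[fallback] answer to: {prompt}"


class ModelRouter:
    """The front end calls this one interface; models can be added or
    swapped without touching application code."""

    def __init__(self, providers: list[LLMProvider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error: Exception | None = None
        for provider in self.providers:  # try providers in priority order
            try:
                return provider.complete(prompt)
            except Exception as err:  # e.g. outage, rate limit, deprecation
                last_error = err
        raise RuntimeError("all providers failed") from last_error


if __name__ == "__main__":
    router = ModelRouter([PrimaryModel(), FallbackModel()])
    print(router.complete("Summarize Q2 revenue drivers."))
```

The design choice is the point: because the router, not the application, owns the provider list, swapping in tomorrow's best model is a configuration change rather than a rebuild.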
[2]
CXOs Move Beyond Experimentation In GenAI Adoption Game
The boom of generative AI (GenAI) has prompted businesses across banking, finance, retail, travel, telecom and beyond to explore its potential for gaining a competitive edge. As I've previously reported, the technology has been rapidly evolving and improving, and now the race is on for corporations to find a fit. The accelerated pace of optionality can be overwhelming, like a rapid-fire game of Tetris®, as CXOs look to deploy these increasingly advanced models in AI-based products and services. Most CXOs I talk with seem frustrated as they continue to experience challenges on the road to reaping GenAI goodness. It is all too easy to box yourself in with the wrong choices or become one of the many highly visible failures that I wrote about in my Forbes story The AI Inflection Point. And despite the hype around GenAI, a McKinsey & Co. report states that only "11 percent of companies have adopted Gen AI at scale." Why is that?

50 Ways To Leave Your AI Project In The Dust

I often write and speak on the importance of taking a strategic approach when rolling out the latest AI. But what does that mean, exactly? What are the rules of the game? And how can things go wrong? These are key questions, as enterprises struggle to move beyond the initial experimentation stage, with the prototype-to-production lifecycle averaging around eight months, according to Gartner. High costs, concerns over data privacy and security, and technical complexities - including system integration and the need for skilled AI talent - create large barriers to entry. Those are just some of the challenges for CXOs. While it may not be hard to apply GenAI to very targeted areas or groups, tapping and unleashing its power across the enterprise, spanning multiple use cases, with just the right models, integration and fine-tuning, is no small task. Here are some of the boxes CXOs need to check off to accomplish such a feat.

A moving target: Generative AI solutions appear in the market seemingly overnight. Technical teams are challenged to find robust tools that simplify the AI development process from inception to deployment.

Model selection and fine-tuning: Configuring the right prompts, adapting the best models, and testing and ensuring an enterprise-ready solution is very time-consuming. It may not be easy to find and use models that are community-driven, open-source or application-specific, and sourcing and implementing inappropriate models can have a detrimental effect on business outcomes.

Overcoming integration hurdles: Technical complexities and extended development cycles represent significant obstacles in AI integration. These challenges stem from the need for specialized skills in managing sophisticated AI models and the substantial time investment and staffing required to ensure these systems are both effective and reliable.

A fragmented space: Most tools available today are offered as separate solutions, making collaboration difficult. Technical teams must either find something turnkey or cobble together solutions that eventually work together.

Avoiding lock-in: The flip side is monolithic solutions from big tech - hyperscalers like Azure, Amazon, Google et al. - that might check off a number of boxes but also box you in, limiting choices and opportunities for growth.

From infrastructure to front end: You need to cover the bases regarding integration with cloud, back-end systems, data, workflows, and user interface. You need an easy way to generate AI agents and tap APIs.
Two critical paths can throttle projects if not addressed:

1. Progressing from experiment to pilot to production. There are a growing number of options to address the obstacles mentioned above. For example, cloud environments like Hugging Face, Google Cloud, Azure and others offer extensive cloud and GenAI capabilities, and there are model providers like Cohere, OpenAI and AI21 Labs whose generative AI products are tied to their models.

2. Navigating the crowded tooling landscape. The LLM/generative AI market is saturated with flow builders, while data management is gaining overdue attention. Ops-centric frameworks focus on the efficient, reliable, and secure operation of large language models (LLMs) in production, ensuring optimal performance and scalability. Then there are hub-, data- and flow-centric platforms that, respectively, support the development and orchestration of LLM and GenAI applications, emphasize data discovery, design and development, and build flows for conversational use.

If this all sounds a bit much, remember that not all CXOs need to know how AI works - they just need to learn ways to adopt technology to change the game and transform their business competitiveness.

Find Partners To Help Close Your GenAI Adoption Gap

Back in January, I wrote about the NVIDIA (NVDA)-backed conversational and generative AI company Kore.ai, which offers AI platforms for enterprises. After its $150 million funding round, the company seems to have jumped in to tackle head-on the GenAI adoption challenges enterprise CXOs are facing. With its eyes set on surpassing the tech giants and upstart rivals, today Kore.ai announced and launched GALE (Generative AI and LLM Platform for Enterprises), an end-to-end platform enabling enterprises to innovate, experiment with and deploy GenAI solutions. According to the company, GALE simplifies GenAI development and purportedly cuts development time by up to 50%, offering a wide range of use cases that can deliver business value and accelerate adoption. (Hey Siri or Alexa, why do we always seem to use female names for GenAI platforms?) The aim with GALE is to create a collection of essential AI tools for the enterprise. A complete AI productivity suite spans many capabilities, but the suites currently on the market often lack the depth of specialized products targeting specific areas. GALE captures nearly all of these elements in a single no-code (my favorite for adoption) AI productivity suite. Could this mean faster innovation and quicker responses to market changes? Hopefully. Business leaders know it's not just about hopping on the new tech bandwagon; it's about giving every team the tools to grow, innovate, and meet customer demands. Again, what I find important here is not knowing how it works, but how tools like these can help CXO leaders make an impact on their business - preferably an impact on customer value.

Winning In The Game Of Enterprise Fit

I connected with Vaibhav Bansal, VP of global research firm Everest Group, who was quoted in the release, about what it takes to win in this hyper-competitive, high-stakes game: "As enterprise adoption of GenAI picks up pace, organizations are looking to achieve faster time to value from AI initiatives. They need platforms that offer an easy way to experiment, build, deploy, and scale AI agents and applications.
"Typical platform features include an AI/ML orchestration layer for efficient model management, the ability to seamlessly connect with enterprise data, a low-code interface, and an application development layer for deploying AI agents or apps. Beyond these, enterprises need access to the best-performing foundation models, provisions for bringing their own models, guardrails to ensure fairness and data security, the ability to fine-tune models on enterprise data, and pre-configured GenAI libraries and frameworks for different use cases. Enterprise-grade GenAI platforms that support such features promise an exciting future and have the potential to be the one-stop suite for all enterprise AI needs."

Will GALE shake up the game? Time will tell, but one competitive differentiator seems to be the freedom of choice it brings to the table for models, deployment options, data and applications. Another is the ability to experiment, tune, integrate and deploy at enterprise scale much more quickly than with other methods. Since CXOs need ready-to-go models they can adapt to their business needs, GALE plays nice with any LLM you throw at it - commercial, open-source, or custom - giving the oh-so-desired flexibility to adapt as things evolve. The bottom line is that enterprise CXOs have been waiting for platforms like these - an AI starting point that simplifies the complexities of GenAI building, accelerates adoption, and enhances business processes for long-term value. It is an exciting time to be an enterprise CXO - let the next-level games begin.
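As a rough illustration of two of the platform features Bansal names - bring-your-own-model support and guardrails - here is a minimal sketch of a model registry with a post-generation guardrail hook. This is a hypothetical construction for illustration, not GALE's actual interface; every name in it is made up.

```python
# A minimal sketch of a bring-your-own-model registry with a guardrail
# hook. Commercial, open-source, or custom models all register through
# the same callable interface; guardrails screen every output.
from typing import Callable

ModelFn = Callable[[str], str]       # prompt -> completion
GuardrailFn = Callable[[str], bool]  # completion -> allowed?


class ModelRegistry:
    def __init__(self) -> None:
        self._models: dict[str, ModelFn] = {}
        self._guardrails: list[GuardrailFn] = []

    def register(self, name: str, fn: ModelFn) -> None:
        """Any model joins the suite by exposing the same callable shape."""
        self._models[name] = fn

    def add_guardrail(self, check: GuardrailFn) -> None:
        self._guardrails.append(check)

    def run(self, name: str, prompt: str) -> str:
        output = self._models[name](prompt)
        for check in self._guardrails:  # screen output before returning it
            if not check(output):
                return "[blocked by guardrail]"
        return output


if __name__ == "__main__":
    registry = ModelRegistry()
    registry.register("custom-faq", lambda p: f"stub answer: {p}")
    # Toy guardrail: block outputs leaking a hypothetical internal code name.
    registry.add_guardrail(lambda text: "PROJECT_X" not in text)
    print(registry.run("custom-faq", "What are your store hours?"))
```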
[3]
Dell Technologies BrandVoice: Generative AI Business Value Emerges In Diverse Use Cases
Organizations are leveraging generative AI to improve healthcare quality, railway operations and critical communications. 2024 was earmarked as the year generative AI would help organizations realize productivity gains, cost reduction and even revenue generation. Progress has been promising: 65 percent of businesses regularly use GenAI in at least one business function - double the share from a year ago, according to this McKinsey report. The average organization uses GenAI in two functions, most often in marketing and sales and in product and service development, the consultancy found. Some organizations are seeing more significant benefits from their GenAI investments, leading to tangible human progress: better patient outcomes, enhanced public information services and even increased safety.

Northwestern Medicine, a non-profit healthcare system, has embraced GenAI to improve the quality of patient care at its 11 hospitals. The organization is using the Automated Radiology Interpretation and Evaluation System (ARIES), a multimodal small language model, to quickly review chest X-ray images, rapidly providing physicians with diagnostic findings and detecting anomalies that traditionally took hours of review. ARIES enables radiologists to interpret the images, identify the most critical patients and address their health issues faster. It is also proving to be rocket fuel for physician productivity, providing a 40% efficiency lift. "When one of our more junior radiologists first worked with ARIES, it took his productivity level to that of someone with 15 or 20 years more experience - without any drop-off in quality," said Dr. Samir Abboud, chief of emergency radiology at Northwestern Medicine.

Switching gears to transportation: railway operations are notoriously manual enterprises, but some companies are working to change that. Duos Technologies is using AI and GenAI to help organizations augment the safety of railway operations. The company developed an automated Railcar Inspection Portal (rip®) with a trackside edge data center that uses AI to inspect trains in real time, improving inspection efficiency and safety. AI enables rip® to capture and analyze 360-degree images of every train car in just seconds, including passenger cars traveling at over 125 miles per hour. The automated approach has boosted inspection accuracy by 8x while providing a 120x inspection performance boost over manual operations. "By freeing up more people to be fixers instead of finders, railroads decrease their dwell time and improve their bottom line," said Mark Smith, chief mechanical officer of Duos Technologies. Leaning into its AI capabilities, Duos is currently building a GenAI system that will analyze pictures of a part when it is new and when it is broken, and create images depicting what it looks like just before it breaks. The company's AI models can use these images to improve inspection accuracy through prediction.

Digital assistants continue to provide a convenient on-ramp to democratizing information access for citizens. Exhibit A is the City of Amarillo in Texas, which uses AI to democratize access to city services for the 24% of its population that does not speak English. Sixty-two languages and dialects are spoken at one middle school alone. "How do you bridge that communication gap and build relations with those communities?" said Rich Gagnon, the City of Amarillo's CIO and assistant city manager.
Gagnon and his staff provided an answer by creating Emma, a GenAI digital assistant that answers citizen and visitor questions about park facilities and other non-emergency information in multiple languages, all from the city's website. Whereas most chatbots are faceless programs, Emma is a large language model-powered "digital human," a virtual representation of a person that citizens can interact with more naturally through the city's website.

It's clear that organizations across diverse sectors are realizing real business value from GenAI. What do Northwestern Medicine, Duos Technologies and the City of Amarillo have in common? A trusted partner: Dell Technologies. Northwestern Medicine partnered with Dell's HPC & AI Innovation Lab to build and test workflow solutions in a secure colocation facility before deploying them in its own IT environment. Northwestern Medicine is currently working on several other projects that will use Dell's AI infrastructure, including building a predictive model for its electronic medical record system. Duos is using Dell AI edge and data center solutions: IT staff remotely manage and update Dell PowerEdge servers using automated processes and proactive maintenance strategies enabled by the iDRAC management platform, and they automate parts-ordering through Dell ProSupport Plus. Repairs that took days to complete with previous solutions now take minutes. In Amarillo, GenAI consultants from Dell Professional Services helped advise Gagnon and his team on a technology roadmap for building Emma and managing the data in the city's LLMs and GenAI applications.

Dell is leaning into its strength as an end-to-end solution provider for customers' AI initiatives. Key to this is the Dell AI Factory, which spans data, infrastructure, professional services, an open ecosystem and use cases. The Dell AI Factory uses modular infrastructure to give organizations the flexibility to adapt as they pursue their desired business outcomes, while Dell's professional services organization helps them prepare their data and identify and execute use cases. The Dell AI Factory provides a path for organizations to produce repeatable results as they work to create content, automate operations and generate insights at scale.
[4]
IBM InstructLab And Granite Models Revolutionizing LLM Training
In the course of human endeavors, it has become clear that humans can accelerate learning by taking foundational concepts initially proposed by some of humanity's greatest minds and building upon them. This concept was famously articulated by Sir Isaac Newton when he stated, "If I have seen further, it is by standing on the shoulders of giants." Fittingly, since he first wrote this in a letter to Robert Hooke in 1675, he has come to be regarded as one of those giants on whose shoulders humanity's progress has been built. Like many of his contributions to physics, his sentiment has been proven many times over. Mathematical and scientific concepts that were once solely the purview of PhDs are now taught sometimes as early as elementary school - concepts like algebra, geometry and even the basics of thermodynamics. One of the keys to this kind of educational acceleration is the use of examples, whether real or hypothetical, to demonstrate, reinforce and apply the learned concepts. Humans are now applying the same idea in the field of generative AI. As generative AI transitions from its experimentation phase into its value-creation phase, the way foundation models are trained is also evolving. Just as humans learn how to learn as they become more sophisticated in a subject, so too have some - like the IBM Research teams working with their Red Hat counterparts - started to evolve how generative AI models learn, with their recently launched InstructLab. In doing so, they are demonstrating significant acceleration in how foundation models can be customized for specific tasks.

Unlocking A New Way To Train

InstructLab is an open-source project that aims to lower the cost of fine-tuning LLMs by enabling changes to be integrated into an LLM without fully retraining the entire foundation model. According to a recent IBM blog post, the key is not only using human-curated data and examples but also augmenting them with high-quality synthetic examples, generated by an LLM, that mirror real-world data. Just as with humans, these examples provide a solid foundation for learning about the topic, significantly improving the model in a specific domain without a full retraining of the core model. Synthetic data examples can save companies the time, effort and funds they would otherwise spend generating real data, and by utilizing them, the InstructLab technique brings a new level of scale to customizing models. With its recently released family of Granite models, IBM used InstructLab to demonstrate a 20% higher code generation score along with a reduction in the time it takes to achieve that quality. In a blog post summarizing IBM Research director Dario Gil's keynote at this year's Think conference: "Gil said that when IBM's Granite code models were being trained on translating COBOL to Java, they had 14 rounds of fine-tuning that took nine months. Using InstructLab, the team added newly fine-tuned COBOL skills in a week, requiring only one round of tuning to achieve better performance." This was achieved by using human-written, paired COBOL-Java programs as seed data. The seed data was then augmented by using InstructLab to convert an IBM Z manual and various programming textbooks into additional, synthetically generated COBOL-Java pairs. The new data was then fed into the core Granite model, resulting in the aforementioned fine-tuning acceleration.
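As a schematic illustration of that seed-plus-synthetic pattern - not IBM's actual InstructLab pipeline - the sketch below expands a few human-written COBOL-Java seed pairs into a larger synthetic fine-tuning set. The `teacher_generate` function is a hypothetical stand-in for a call to whatever teacher LLM produces the synthetic examples.

```python
# Schematic sketch of seed-plus-synthetic data augmentation: a handful of
# human-curated COBOL-Java pairs are expanded by a (stubbed) teacher model
# into a larger training corpus written out as JSONL.
import json


def teacher_generate(seed: dict[str, str], n: int) -> list[dict[str, str]]:
    # Placeholder: a real pipeline would prompt a teacher LLM to produce
    # n new COBOL-Java pairs in the style of the seed example.
    return [
        {"cobol": f"* variant {i} of: {seed['cobol']}", "java": seed["java"]}
        for i in range(n)
    ]


seed_pairs = [
    {"cobol": "ADD A TO B GIVING C.", "java": "int c = a + b;"},
    {"cobol": "MOVE X TO Y.", "java": "y = x;"},
]

# Expand each human-curated seed into several synthetic examples, then
# write the combined set out as the fine-tuning corpus.
training_set = list(seed_pairs)
for seed in seed_pairs:
    training_set.extend(teacher_generate(seed, n=3))

with open("cobol_java_train.jsonl", "w") as f:
    for example in training_set:
        f.write(json.dumps(example) + "\n")

print(f"{len(seed_pairs)} seeds -> {len(training_set)} training examples")
```

The leverage comes from the ratio: a small, expensive human-curated set seeds a much larger synthetic one, and only the fine-tuning step, not the full foundation-model training, is repeated.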
For reference, Granite models are IBM's family of large language models (LLMs) aimed at improving the productivity of human programmers. These LLMs come in different parameter sizes and apply generative AI to multiple modalities, including language and code. Granite foundation models are being fine-tuned to create assistants that help with translating code from legacy languages to current ones, debugging code and writing novel code from plain-English instructions. With IBM's focus on enterprise-class generative AI, Granite models have been trained on datasets covering not only code generation but also academics, legal and finance.

Standing On The Shoulders Of Giants

It is clear that the large-scale training of new foundation models has had a profound impact on generative AI and on what humanity can do with those models. It is now time to build on that impact as foundation models are brought to bear on real-world use cases and applications that provide value - especially for enterprises. However, the standard training methods used to develop those foundation models demand enormous data center resources, with substantial capital and operational costs. For these foundation models to deliver on the promise of generative AI, companies need to rethink their model training processes. To deploy AI models at scale, fine-tuning techniques need to evolve to include more domain-specific data at a lower cost. From the results demonstrated so far, IBM and Red Hat's InstructLab project appears to be doing just that. Time will tell exactly how far enterprises will be able to see standing on the shoulders of these particular giants.
As businesses move beyond the pilot phase of generative AI, key lessons emerge on successful implementation. CXOs are adopting strategic approaches, while diverse use cases demonstrate tangible business value across industries.
As organizations move beyond the experimental phase of generative AI (GenAI), they are encountering both challenges and opportunities in scaling their initiatives. A recent study by Forbes Insights and Deloitte reveals that while 79% of executives believe GenAI will substantially impact their organizations, only 45% have moved beyond the pilot stage [1]. This transition from pilot to production is proving to be a critical juncture for businesses seeking to harness the full potential of GenAI.
Three primary lessons have emerged for organizations looking to scale their GenAI initiatives:

- Build the right data foundation, with relevant, high-quality, clean and well-governed data
- Reimagine business processes around the new toolkits, and manage the change and reskilling that follows
- Choose the right tech stack and governance framework, hedging across models and preparing for regulation

Companies that have successfully navigated this transition emphasize the importance of a robust data strategy, clear alignment with business objectives, and proactive risk management [1].
As GenAI adoption gains momentum, C-suite executives are moving beyond experimentation and adopting more strategic approaches. This shift involves:

- Selecting and fine-tuning models deliberately rather than chasing every new release
- Avoiding vendor lock-in while covering integration from infrastructure to front end
- Partnering with enterprise platform providers to close the adoption gap

CXOs are increasingly recognizing the need for a holistic approach that integrates GenAI into their overall business strategy, rather than treating it as a standalone technology initiative [2].
The business value of GenAI is becoming evident across various industries and functions. Some notable use cases include:

- Faster chest X-ray interpretation at Northwestern Medicine, lifting radiologist productivity by 40%
- Automated railcar inspection at Duos Technologies, boosting accuracy 8x over manual operations
- Emma, the City of Amarillo's multilingual digital assistant for non-emergency citizen services

These applications are not only enhancing efficiency but also driving innovation and creating new revenue streams [3].
IBM's recent introduction of InstructLab and Granite models represents a significant advancement in large language model (LLM) training. These innovations aim to:

- Lower the cost of fine-tuning by avoiding full retraining of the foundation model
- Augment human-curated seed examples with LLM-generated synthetic data
- Shorten customization cycles dramatically, as in the COBOL-to-Java work that went from nine months to a week

This development could potentially democratize access to advanced AI capabilities, allowing a broader range of businesses to leverage GenAI technologies [4].
As generative AI continues to evolve, organizations that successfully navigate the transition from pilot to production, adopt strategic approaches, and leverage diverse use cases are likely to gain a competitive edge in the rapidly changing business landscape.
© 2025 TheOutpost.AI All rights reserved