[1]
ASUG Tech Connect brings clarity to LLM accuracy - and shows how SAP's GenAI Hub can bring AI to all customers
In truth, AI wasn't the top issue at ASUG Tech Connect. A compelling pit stop on SAP's TechEd on Tour, ASUG Tech Connect lured a mix of tech leaders, architects and developers to West Palm Beach. With twice the attendees of the inaugural Tech Connect event last year, these sessions struck a chord. The hottest topics ranged from AI to the role of enterprise architects.

In my getting-real-about-enterprise-architects session with analyst Josh Greenbaum, we grappled with all of the above, spurred by the sharp questions of attendees. The biggest takeaway? Companies that chase the Enterprise Architect superhero skill set are on the wrong track. Modern Enterprise Architecture is a team sport, best informed by a Center of Excellence mindset (I also hit on these issues from a developer angle via my podcast, ASUG Tech Connect review - with Jelena Perfiljeva of Boring Enterprise Nerds).

The best SAP pros have long since mastered how to walk and chew gum, and that was on display with AI. During a live podcast taping with ASUG CEO Geoff Scott, Greenbaum and myself, the audience asked us to focus on Enterprise Architects, not AI. But the ASUG Tech Connect AI sessions were also well-attended. And: the presenters from SAP did much better than I was expecting at dialing back the AI sales pitch and taking attendees under the hood. Example: one SAP presenter acknowledged something I have yet to hear any other vendor say this entire year - that LLMs, at times, ignore the data provided in the context window, in favor of their own LLM "answers".

This is a crucial problem/challenge for enterprise AI. When vendors use external LLMs in their AI architecture, they rely heavily on the context window, supplemented by data via Retrieval Augmented Generation (RAG), to achieve a more accurate/relevant result. SAP has useful pursuits underway to minimize this issue, including a research project with Stanford, but I give SAP credit for naming it, whereas most vendors have given attendees the runaround.

After ASUG Tech Connect, I caught up with Dr. Walter Sun, SAP's Global Head of Artificial Intelligence, who co-presented some AI sessions on site, one of which I attended. For those who haven't tracked SAP's AI news since SAP Sapphire, what would Sun pick from the deluge of SAP AI announcements? As Sun told me:

We now have an SDK available for developers, and we're adding support for JavaScript - and then early next year, ABAP AI, which is the important area of letting our ecosystem of developers who are using ABAP get AI developer tools. Existing capabilities like GitHub Copilot don't have support for ABAP. Generative AI Hub, which is our AI core that has all the different models - we have more than 25 models now, and so we're giving our customers and developers more choice in terms of which ones you use. We're in the process of building benchmarks - as well as business benchmarks - to help inform which models are best.

Sun believes SAP has put customers in a good position with AI agents:

I know you heard us talk about the AI expert agents. We did a demonstration of a live multi-agent scenario at Sapphire in June... But we actually doubled down on it again at TechEd and last week, talking about how we are thinking that these expert AI agents are really helpful, because they can be specialized and focused on specific tasks. So therefore, having a handful of many agents working together to solve a task can get more done than having one generalist agent - the typical original, single Large Language Model that handles all the problems. And that's what we think is exciting for next year.
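For readers who want the mechanics behind all this context window talk, here is a minimal sketch of how a RAG-style prompt is typically assembled: retrieved passages are placed into the context window alongside the user's question, with an instruction to answer only from that material. This is a generic illustration under my own assumptions - hypothetical helper names, not SAP's GenAI Hub API:

```python
# Minimal sketch of RAG prompt assembly - illustrative only.
# retrieve_passages() is a hypothetical stand-in for a vector-store lookup.

def retrieve_passages(question: str, top_k: int = 3) -> list[str]:
    # A real system would run a similarity search over embedded documents;
    # hardcoded passages keep this sketch self-contained.
    corpus = [
        "Employees may book economy class flights for approved travel.",
        "Business class is not reimbursable without VP approval.",
        "Hotel bookings must not exceed the regional nightly cap.",
    ]
    return corpus[:top_k]

def build_rag_prompt(question: str) -> str:
    context = "\n".join(f"- {p}" for p in retrieve_passages(question))
    # The grounding instruction is the crux: the model is told to stay
    # inside the supplied context. As the SAP presenter admitted, LLMs
    # sometimes ignore this and answer from their training data anyway.
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_rag_prompt("Can I fly business class?"))
```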
Vendors like SAP are trying to take the risk/uncertainty out of AI projects by embedding AI capabilities - so customers can consume AI without launching their own projects. But customers still need guidance on AI use case selection. If they pursue an AI project that goes awry, they lose footing with leadership. I asked Sun: is it better to start with something more conservative/out-of-the-box like job description generation, or is it better to involve internal teams in whiteboard ideation? Sun responded:

I think the best way is to look at scenarios where there's a lot more human interaction. Everything is still human in the loop, but I think starting with tasks - job description creation is a good one - or other ones. There's a tool in SuccessFactors which helps you give work feedback. It's like you and I were co-workers, and someone asked you, 'Jon, give feedback on Walter' - and you can use generative AI to help curate your response. I think that's useful, because you are always in the loop. But basically you say, 'Hey, I want to write, or can you please elaborate or make my description more elaborate, or make it more professional.' And you get a new response, and you can double check that response.

Sun advises holding back on more ambitious use cases until there is more comfort with AI tool use. That means holding off on total automation as well:

I think that gets everyone comfortable with what's happening. If you build too many use cases which are either under the hood or hidden - that's kind of scary to people. If you actually do things that are very automated, where it just goes ahead and does a lot of different things, you're not really sure what's happening, right? So if you have something that's maybe not low stakes, but maybe safer, so you don't feel as if you're taking too many risks in terms of leveraging the technology, you can actually start to get more comfortable.

Build up to mission-critical use cases:

Basically, you start small, you get more comfortable. You realize what AI can and cannot do, and then you learn how much you need to be involved. After that, you can start with more mission-critical things.

Getting to the bottom of how Large Language Models recognize (or ignore) the context window is not just a geeky question; it's a crucial topic - especially since we hear such different claims from enterprise vendors. So I hit the ASUG Tech Connect show floor in search of novel approaches. There, I ran into sovanta AG, a BTP-focused SAP partner. I wrote about sovanta at last year's Tech Connect as well. Given that SAP customers need alternatives to code customization, services firms that focus only on BTP are still a rare (and welcome) option.

To give customers quick access to their BTP services, sovanta has built a "BTP Accelerator," where users can navigate BTP services and templates according to role/security permissions. You could think of this as providing a governance framework, and a way to move towards a BTP-COE model (it also provides a means for project managers to monitor BTP services initiatives). Since last year, the sovanta team has been rolling out ML-enabled use cases. This time around, sovanta showed me a demo of an agentic AI type of use case: a generative bot that answers questions about documents, such as HR policy (this is their GenAI DocumentChat on SAP BTP).
As Cody Wedl, President of sovanta America, told me:

We also have developed a document chat, because right now, SAP doesn't have an out-of-the-box system to manually upload PDFs in a GDPR-compliant way to analyze documents. What we did is: we actually tied our innovation factory into this. So if I want to build an agenda for a workshop, all I do is I type in, 'Write me an email, or write me a workshop template,' and it'll go into our entire repository, and I can start generating more content with our LLM in here, and it's all GDPR-compliant. Or I can upload all of my HR documents from my organization, and I can ask, 'Can I fly to ASUG business class?'

The answers come directly from the uploaded policy documents. Wedl walked me through it:

Here, it's analyzing it. It says, 'No - you can only buy economy. You cannot buy business class.' And it actually references where it is within our policies. What's really great about this? It's out of the box. We can actually deliver this to customers within one week, and they can tap into any type of repository they want, in a GDPR-compliant way.

Tommi Kramer, sovanta's Chief AI Officer, joined the demo:

This is all built on BTP, and then we can actually use the BTP language tokens to bring in the Large Language Model that the customer prefers. So complex documents like contracts or supply chain - especially if you're talking about cross-border supply chain issues - you can upload all those documents here. It's a great start for organizations to introduce gen AI in a GDPR-compliant way, to safely use gen AI tooling right away.

This solution utilizes a well-known technique for optimizing RAG, called document chunking. Get chunking right, and you can optimize RAG document retrieval. In this case, sovanta's document chat 'chunks' the document text into the HANA Vector Engine. Then, users can query the system via sovanta's GenAI DocumentChat interface - and the external LLM "combines" the RAG context from the HANA Vector Engine for the output. Because no LLM fine-tuning is required, this allows customers to quickly access their choice of external LLMs via SAP's Gen AI Hub. Kramer explains:

The good part is we use the BTP's language model [access], because BTP with SAP's Gen AI Hub has different ways of asking Large Language Models. You can use the public APIs of OpenAI, and all the LLM companies. Or you can use Mistral, the open source language model that can be hosted, so you have data privacy covered for GDPR, because that's a niche thing for Germany, which is very data-protective [Author's note: SAP expanded its partnership with Mistral in October, for similar reasons].

Time to ask sovanta the burning question: LLMs sometimes ignore the provided context and freelance - or they "combine" it, as sovanta put it, with non-relevant information. How does sovanta handle this? Kramer's candid answer includes prompt engineering:

The context window basically is what we can control, because we live here in this context. The second one is hallucinations, right? So the thing is, we told this model - we reworked the prompt. Every time you send it out to the Large Language Model we say, 'Give an answer to the user, but only use answers from the documents. If you don't find an answer in one of the documents, tell the user that you don't have an answer.'
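That restrictive instruction is essentially the grounding prompt sketched earlier. The chunking step is also easy to illustrate generically - here is a minimal fixed-size chunker with overlap, my own example rather than sovanta's code or the HANA Vector Engine API:

```python
# Minimal document chunking sketch - generic illustration, not sovanta's
# GenAI DocumentChat implementation or the HANA Vector Engine API.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks.

    Overlap helps retrieval: a sentence straddling a chunk boundary
    still appears intact in at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    for start in range(0, len(text), chunk_size - overlap):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks

policy = "Travel policy: employees book economy class flights. " * 40
chunks = chunk_text(policy)
# Each chunk would then be embedded and written to a vector store (in
# sovanta's case, the HANA Vector Engine) for retrieval at query time.
print(f"{len(chunks)} chunks; first chunk starts: {chunks[0][:50]}...")
```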
Another crucial benefit to this type of architecture? You can get a level of explainability you simply cannot get from the LLM itself. For customers in regulated industries/regions, explainability-for-compliance is likely to be required for AI adoption. Kramer:

The good thing is, it always tells you where, on which page, you can find that particular connection. We did that because that was one of the main issues: people didn't want to have an answer which is hallucinated. They want to have a clear answer. They want to have it formulated or summarized, and they want to see where the original text is from - especially because we did this also with German law text. German law is publicly available on the website of the government. We went in and made law texts understandable, and then you always need to refer or reference where this is from in the original text, because you at least need to make sure, if they don't understand the original text right at the detail, they can at least use this as a consultation.

Here is a quick screenshot of sovanta's gen AI document chatbot, and our attempts to "break" it. As you can see, we attempted to provoke a hallucination by asking, re: ASUG Tech Connect travel: Can Tommi fly by helicopter? This question is almost certainly outside the scope of the travel policy, which might nudge the LLM towards surfacing its own answer. But instead, we get: No, Tommi cannot fly with a helicopter. The provided context information only mentions flights in standard class (economy)....

Whatever sovanta is putting into the prompt instruction, the LLM is taking that instruction to the letter, at least for this question. (It's worth noting that the LLM, while providing the right answer, doesn't recognize the absurdity of taking a helicopter from overseas to Florida - an important reminder that "intelligence" in this era of generative AI is a more relative thing than the hype implies. Of course, that doesn't negate the usefulness of these types of apps. Sidenote on explainability: you can see the source documents for the output answer listed in the right-hand column.)

It was good to hear sovanta talk about their strong partnership with SAP on GenAI DocumentChat. Too often in the last year, SAP has given the impression that you have to be on RISE (or GROW) to access AI functionality on the SAP platform. In terms of getting all customers access to SAP AI services, that's not a useful perception. This, on the other hand, is a great example of how a BTP-based approach, along with SAP's GenAI Hub, gives all customers a way of getting started with SAP and AI now (sovanta says the app will soon be on the SAP store). I believe that will actually help fuel SAP's desired engagement around RISE/GROW, by giving customers confidence in SAP as an AI/innovation partner.

Of course, your answers are only as good as your data. Policy documents are living documents. As Greenbaum pointed out during our demo, the costs of getting that wrong can be dire in fines/penalties. Here, sovanta has answers, including an adapter for Confluence to keep these living documents up to date. I agree with sovanta: the answers from this solution are not necessarily going to be 100 percent compliant, but as long as the solution itself can be shown to be sourcing the correct/up-to-date documents, I suspect you are checking the right boxes - though I'm not here to certify or audit any results.
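On the mechanics of those page-level citations: they come from carrying source metadata through retrieval, so each retrieved chunk knows which document and page it came from. A minimal sketch under my own assumptions (hypothetical data structures, not sovanta's code):

```python
# Sketch: carrying page-level metadata through retrieval so answers can
# cite their sources. Illustrative only - not sovanta's implementation.
import re
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    document: str
    page: int

def tokens(s: str) -> set[str]:
    return set(re.findall(r"\w+", s.lower()))

def retrieve(question: str, index: list[Chunk], top_k: int = 2) -> list[Chunk]:
    # Stand-in for a vector similarity search: rank by keyword overlap.
    q = tokens(question)
    return sorted(index, key=lambda c: -len(q & tokens(c.text)))[:top_k]

index = [
    Chunk("Flights must be booked in economy class.", "travel_policy.pdf", 3),
    Chunk("Hotel costs are capped per region.", "travel_policy.pdf", 5),
]

hits = retrieve("Can I fly business class?", index)
context = "\n".join(f"[{c.document}, p.{c.page}] {c.text}" for c in hits)
# The bracketed source tags travel with the text into the prompt; the model
# is asked to cite them, and the UI can then list them alongside the answer,
# as in sovanta's right-hand column.
print(context)
```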
Now, this does not necessarily solve the LLM/context window issue. I still see no consensus from experts on this. However, I do think you can get even closer with specifically-trained models - in particular, with smaller language models (SLMs). This is vital for agentic AI, which will often start with smaller, agent-specific tasks that can be 'orchestrated' together. When I asked Sun about this problem, that's where he went first:

If you think about AI expert agents, that's the whole idea. You can actually have an expert agent, a hotel travel agent, car rental agent - you can train and tune these models to be much smaller, and they're really focused on, say, the best hotels around the world. It doesn't need to know about dinosaurs. It doesn't need to know about technology. So if you ask it something that's not about a hotel, it's just going to say, 'My job is to book hotels.'

Sun's other tips for keeping LLM output on track, beyond RAG? Prompt instructions (not visible to the typical user - a technique sovanta has utilized as well), and reducing the so-called "temperature" settings on LLMs (a higher temperature is perhaps useful for more creative output, but not for the kinds of use cases I've detailed here).

During our talk, Sun referred to SAP's AI ethics assessment framework, another key aspect of AI use case selection. I've covered SAP's approach to AI risk mitigation in the past. Here is a slide Sun covered during his AI ethics session at ASUG Tech Connect. Customers could benefit further from SAP's sophistication around AI risk management; why not give customers' internal teams access to AI ethics tools, so they can calibrate their own use cases?

As usual, I've run out of space before I've run out of topics. In brief, Sun says SAP is still working on its own foundation model. This would potentially include opt-in, anonymized/aggregated customer data - a topic I'll return to. One way this could hone model relevance/accuracy? By providing RAG access to knowledge graphs, a better format for tabular/transactional enterprise data - something vector databases can struggle with.

Then there is the question of how SAP's AI investments line up with its ESG commitments. There may be ways AI can help with ESG, but with Google, Microsoft, and Amazon all pursuing nuclear power expansion due to AI needs, it's clear that the tension between AI market share and energy consumption isn't going away soon. I don't believe there are easy answers here, though the intentional use of smaller models - models no bigger than the task/process requires - could help. Sun thinks SAP can balance AI innovation and ESG commitments using these types of techniques.

Finally, one BTP expert at Tech Connect expressed concerns that SAP's Gen AI Hub is not necessarily providing access to the latest LLM releases. When I brought this up with Sun, he characterized this as a solvable problem SAP is working on (one part of that solution could be early access to new LLM iterations via SAP's LLM partners).

The questions I pursued here are moving targets. Still, it's good to see SAP and its partners tackling issues of AI design and output relevance - and doing it transparently. I'm not sure why so many AI vendors are wary of acknowledging these open questions, but I believe customer trust (and therefore AI adoption) is tied to a more open and precise discussion than we usually have. Even as a work in progress, these solutions are far more appealing than the consumer LLMs, and the outrageous trouble they get into by focusing more on "emergent" AI magic than disciplined results. Next year, I think we'll hear much more about gen AI results, and how to get them - which is different, by the way, than ROI. To be continued...
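One technical footnote on Sun's tips above: a hidden scope-restricting prompt and a lower temperature setting can both be expressed in a single chat API call. Here is a minimal sketch using the OpenAI Python client; the model choice and prompt wording are my illustrative assumptions, not SAP's Gen AI Hub configuration:

```python
# Sketch: two of Sun's tips in one call - a hidden scope-restricting system
# prompt, plus a low sampling temperature for less "creative" output.
# Uses the OpenAI Python client; prompt and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.1,  # low temperature: favor predictable, repeatable answers
    messages=[
        {
            "role": "system",
            # Not visible to the end user - mirrors Sun's hotel-agent example.
            "content": "You are a hotel booking agent. If asked anything "
                       "unrelated to hotels, reply: 'My job is to book hotels.'",
        },
        {"role": "user", "content": "Tell me about dinosaurs."},
    ],
)
print(response.choices[0].message.content)  # expected: a polite refusal
```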
[2]
The enterprise stories we need for 2025
How do vendors get long-time ERP customers to embrace cloud and AI versions of their software? This issue gets lots of analyst/executive discussion at ERP briefings. It's a problem for mature vendors, as it costs them a small fortune each year to support old products. Vendors don't want to EOL (end of life) products, but at some point it's just not practical or realistic to keep pouring capital into products that were relevant in a different time and a different computing world. For the vendor, all of the capital spent to enhance an obsolete solution and keep it technically current is capital that could go towards newer solutions, pay down corporate debt, or provide shareholder income.

This isn't a new problem, though. Mainframe product owners wanted their customers to move to newer client-server products. It took seemingly forever, and few vendors could build migration paths that were easy, fast and cost-effective. We saw the same thing a few years later when client-server solutions yielded to newer cloud apps. And now, those cloud apps are ceding ground to AI-powered, public-cloud solutions. In each migration cycle, vendors have to incent, cajole and/or threaten customers to move to more technically current and, hopefully, more functionally powerful solutions. All that vendor prodding doesn't fully work, as some percentage of customers just won't move, or won't move along the timetable the vendor would like.

The real problem is more nuanced than that. Some of the reasons are not within the control of the vendor (e.g., the customer has insufficient capital, or has other more important projects to complete first). And while vendors can help with some aspects of the migration (e.g., creating data migration tools), there are finite limits to what they can influence. Could 2025 be the year vendors pour energy and R&D funds into using AI to make these conversions exceptionally fast and low-cost? We shall see.

Just this week, I read two stories extolling the benefits of WFH or hybrid work, and a very different one complaining of remote workers being slackers. Why is this issue so hard to pin down? What is needed to get a definitive answer to the question: is WFH a good or bad thing?

Top executives who advocate for having people in the office often state that in-person work facilitates cross-team communications, collaboration and accidental brainstorm moments. They also say it's essential to creating/reinforcing the company's culture. In contrast, there are others who point out that WFH is super expensive for employees (e.g., one recent report says its costs to employees are the equivalent of a month of food) and that it creates an adverse culture.

What is clear is that WFH can work in some work situations and is inappropriate in others. It can't work in a manufacturing setting, as the equipment and workforce have to be in the same place. It can work for many other roles (e.g., call center support roles), and the pandemic provided all manner of proof points for this.

What few want to discuss are the soft-side issues that muck up the debate. On one extreme we have control-freak, power-mad and/or untrusting bosses who like to see all their minions in the office. It's a power trip for these people, and the 'culture' they seem to care most about is the one that makes them look like they control the lives and well-being of countless drone worker bees. On the other hand, there are trusting leaders who put in appropriate controls to make sure WFH works, and works well.
But, in all cases, there will be employees who don't like the arrangement or want to cheat. WFH or RTO is not an all-or-nothing proposition. I'm tired of those arguments. For 2025, let's get some realism and solid science into this issue for a change. That would be a story I'd really love.

We've known for some time that AI technology is prone to hallucinations, aberrant behavior, etc. Software vendors, knowing this, suggested that AI-powered processes should have a 'person in the middle' to help research and correct these anomalous events. But when you look at the new AI tools in application software today, it's telling that there is no way for users to report these aberrant AI results. The feedback mechanism is non-existent. There's no way to capture and report the bad AI acts/recommendations. The new AI tech is tucked into an existing process where the reviews/controls/oversight were never part of the process. The issue is worse for smaller employers, as they may not have anyone on staff who knows anything about LLMs, algorithms, etc., let alone how to fix one that goes rogue. So, vendors, if you're going to wax poetic to buyers, customers and analysts about your AI capabilities, you need a solid response to this issue. Maybe you could get away with no or half-baked responses in 2024, but you won't in 2025. (A sketch of what such a feedback mechanism could look like appears at the end of this piece.)

For much of 2023 and 2024, software vendors have been doggedly enigmatic about what new AI capabilities will cost. Even those vendors who proclaimed their AI products would be part of the core solution, and therefore trigger no incremental cost, often placed a few caveats on those statements. Some vendors tried to delineate certain AI tools as free, with other, more advanced AI solutions carrying an added cost. And as ambiguous as that has been, no vendor has had the courage to tell us what the sustainability impacts of their tools are. Given the energy and water that AI technologies consume (and the growing regulatory reporting requirements for same), businesses should be getting hard numbers and/or pre-populated ROI calculators for their AI-enhanced software purchases. Alas, that's not happening in 2024.

In 2025, vendors need to provide more guidance to customers and prospects regarding the projected operating costs, environmental costs, etc. of their AI-enhanced solutions. As a business person, I won't buy products whose costs are unknown or can't be estimated. I'm not in the business of signing blank checks, and I doubt many executives are either. Businesses can't budget, plan or forecast without knowing what things cost and what the drivers behind those costs are. If AI usage is to ever move from experimental to widespread usage, businesses in 2025 will need a better set of cost tools from vendors. Without visibility into AI costs, no executive can determine the ROI of the technology. And without a solid ROI story, operating committees are loath to green-light these initiatives. To recap: we need cost data, ROI templates/estimating factors, benchmarks, etc. before AI deals can really soar. It's been said that ignorance is bliss, but it's not a best practice. (Read that once more for effect.)

The same kinds of tools that vendors are using to bring AI to their applications are also being used by people outside of your firm, and these users are not all benign, nor do they all play by the rules. So-so jobseekers are 'perfecting' their resumes to game your applicant tracking/scoring systems.
Other jobseekers are using tools to resume-spam thousands of job sites every week. Thieves are using AI to generate amazing clones of legitimate invoices that could fool your Accounts Payable system. Unfortunately, vendor after vendor is not looking at this problem (i.e., the abuse of citizen-AI tools), or even acknowledging its prevalence. Feigning ignorance of the problem doesn't make it go away.

One reason for this dismissal of malevolent AI activity is that acknowledging the problem could lead a vendor to radically rethink the processes present in their ERP solutions. If talent acquisition processes, for example, are so AI-compromised, then a vendor would need to create a new kind of talent acquisition process that is more bulletproof, or at least make the current process better protected for the time being. In 2025, the problem will likely become critical, and we'll see some functions experience horrible issues. If customers have to scream and shout to get vendors to deal with this, it casts real doubt on the vision, strategy and value that vendors bring to the table. You can't sprinkle bits of AI across old apps and think the job is complete. AI changes things for software buyers and outsiders. Will 2025 be the year that vendors wake up and smell the coffee re: citizen-AI and its effects?

Through 2024, we mostly saw vendors add bits and pieces of AI into old process workflows, or use AI to make an old workflow a bit more efficient. When we have seen AI-enhanced processes at vendor events, the solutions have often been incremental, and not something we were allowed to play with. There hasn't been much in the way of documentation of these AI-enhanced processes. In some situations, it's because the vendor hasn't really made its AI tools bulletproof yet (or even generally available). One vendor's smart chat tool could only respond to four canned prompts.

For customers to really assess how AI is changing the way they will work, they need better documentation. They especially need vendors to compare/contrast the new proposed workflow to old industry standards. They need metrics to come along with the new processes, so people can document the expected ROI. But, through it all, they need to understand specifically what their roles will be in this new world.

While the above will likely consume 2025, I would also like to see vendors show customers the flip side of these AI-enhanced processes. Specifically, can vendors show what third parties (e.g., jobseekers, criminals, alumni, students, vendors, regulators) will be doing to their parts of these processes, and whether those impacts will be value-creating or wealth-destroying? That's the insight that will make software buyers appreciate their investment in a vendor that thinks of all the angles.

Putting all of the above in context, one can see that the transition to AI solutions requires a rethink of the enterprise applications space. It is not an incremental change opportunity, and approaching the space in itty-bitty baby steps may be the wrong move. Yes, some incremental AI changes could be helpful, but this is one of those times when smart vendors are going back to the drawing board. They are reexamining old assumptions, doing new research, floating all-new ideas/concepts with trusted customers, and devising new solutions with the potential to drive outsized and long-lasting value. Bold thinking is key. But it's a skill set that's not necessarily found in every vendor.
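As promised above, here is a minimal sketch of the kind of capture-and-report feedback hook vendors could embed next to any AI-generated recommendation. All names and fields are hypothetical illustrations, not any vendor's API:

```python
# Hypothetical sketch of an in-app "report this AI result" hook - the kind
# of feedback mechanism this piece argues is missing. Not a vendor's API.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIFeedbackReport:
    feature: str          # which AI capability produced the output
    model_output: str     # what the AI actually said or recommended
    user_comment: str     # why the user flagged it
    reported_at: str      # UTC timestamp for triage ordering

def report_ai_result(feature: str, model_output: str, user_comment: str) -> None:
    """Capture a flagged AI result so a human can review and triage it."""
    report = AIFeedbackReport(
        feature=feature,
        model_output=model_output,
        user_comment=user_comment,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )
    # A real implementation would POST this to a review queue; appending
    # to a local log keeps the sketch self-contained.
    with open("ai_feedback_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(report)) + "\n")

report_ai_result(
    feature="invoice_matching",
    model_output="Invoice #4471 matched to PO #9012",
    user_comment="Wrong PO - amounts don't match.",
)
```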
[3]
How to survive the looming AI avalanche of enterprise automation
The latest advances in AI have brought us to the brink of a massive upsurge in enterprise automation. But will this deliver the promised benefits? AI is still subject to all the same caveats that apply to any new technology. It will take longer than we expect to make a huge difference -- per Bill Gates' adage that we overestimate what we can achieve in one year but underestimate what can be done in ten. In the first few years, people will apply the technology to speed up what they already do rather than use it for true innovation -- the horseless carriage syndrome. As a consequence of these two factors, much of the investment in AI in the next few years will be wasted. But just like John Wanamaker's spend on advertising, the difficulty for enterprises will be knowing which investments will turn out to have been in vain.

One thing's certain. AI is making it dramatically easier for people to automate the work they do. It's now possible to create an agent to perform a task much faster and more easily than ever before. Whereas in the past, building an automation meant mapping out every possible instruction and carefully linking each of them to potential actions, generative AI cuts out a lot of that manual effort. It is able to understand everyday language and automatically figure out how to work with data and functions it finds in the underlying system to deliver what the user is asking for. This means that developers can build automations far more quickly than before, and in many cases non-developers can use no-code tooling to build automations themselves without even having to involve developers.

It sounds like a huge step forward, but is a rapid roll-out of new automation necessarily going to be a good thing? At first sight it seems very welcome. When learning how enterprises have been adopting AI over the past few months, one thing that's struck me is that many of the examples are familiar automation challenges. Rather than enabling new automations, it seems that AI is mostly used to make it easier and quicker to introduce automations that were already long overdue.

This leads to a huge amount of optimism around AI. First of all, it brings new hope of clearing up the longstanding backlog of pent-up demand from people across the enterprise to just be able to get stuff done faster and with less faff. Secondly, it creates an overwhelming impression that AI is delivering an immediate and impactful return on investment. This is somewhat misleading. These are automations that always were needed, and could have been delivered earlier if they'd been prioritized, but were denied the resources, budget or motivation. AI simply lowered the bar to getting them done. The ROI comes not because of AI per se, but as a result of the automation of previously manual processes. Nevertheless, AI swoops in at the last moment and gets all the credit.

All of this further fuels the hype around AI, encouraging enterprises to unleash the technology as rapidly as possible and accelerate the pace of automation. Trouble is, however overdue these automations may be, many of them will quickly become redundant as the technology moves forward. The most obvious route today may not turn out to be the best choice in hindsight. Unleashing an avalanche of automation without proper forethought or co-ordination will have unpredictable and disruptive results. The enterprise landscape will be much changed, but not necessarily for the better.
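To ground the claim above that generative AI can understand everyday language and figure out how to work with the data and functions it finds in the underlying system, here is a minimal sketch of the tool-calling pattern most agent frameworks use. The function names and routing logic are illustrative assumptions, with the model itself mocked out, rather than any vendor's product:

```python
# Minimal sketch of the "tool calling" pattern behind many AI agents:
# the system exposes typed functions, and a model (mocked here) decides
# which one satisfies a plain-language request. Illustrative names only.

def create_invoice(customer: str, amount: float) -> str:
    return f"Invoice created for {customer}: ${amount:.2f}"

def schedule_meeting(topic: str) -> str:
    return f"Meeting scheduled: {topic}"

TOOLS = {
    "create_invoice": (create_invoice, "Create a customer invoice"),
    "schedule_meeting": (schedule_meeting, "Schedule a meeting"),
}

def mock_llm_route(request: str) -> tuple[str, dict]:
    """Stand-in for the LLM: a real agent sends the tool descriptions and
    the user's request to a model, which returns a tool name + arguments."""
    if "invoice" in request.lower():
        return "create_invoice", {"customer": "Acme", "amount": 1200.0}
    return "schedule_meeting", {"topic": request}

request = "Send Acme an invoice for the December retainer"
name, args = mock_llm_route(request)
func, _description = TOOLS[name]
print(func(**args))  # -> Invoice created for Acme: $1200.00
```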
The biggest danger comes from automating processes that were already suboptimal. In the past, I've often questioned the utility of transforming paper forms into manually completed PDFs that are digitally signed, when the system can automatically present an action for approval and has already validated their identity when they signed in. Similarly, there's little value today in helping employees write better emails when a better process would eliminate the need to send an email in the first place. Helping people draft reports more efficiently risks adding to a mountain of reports that no-one ever reads or acts upon. Before automating an existing process, the first question should be: do we even need this process, given the technology and connectivity we now have available?

This risk is compounded once automation is democratized through the dissemination of no-code tools. With everyone suddenly able to automate their own processes, there will soon be multiple automations across the enterprise, each doing essentially the same thing but in very different ways -- the vast majority of which will be hugely inefficient. Developers often talk about the concept of technical debt, where successive enhancements and modifications are layered on top of each other and over time lead to inefficiencies and potential conflicts. The same pattern exists in process debt -- layering new automations on top of each other without revisiting the underlying processes may achieve little more than speeding up wasted effort. In both cases, it's important to take stock every so often and work out where the edifice needs to be rebuilt to streamline operations.

Ultimately, enterprises need to consider radically new ways of doing things that are enabled by the technology. For example, spend management vendor Coupa is currently looking to build an AI-powered supplier collaboration platform that will bypass the need to exchange traditional RFPs, purchase orders and invoices, because its autonomous agents will be able to negotiate and complete those transactions. Salvatore Lombardo, Chief Product and Technology Officer at Coupa, told me:

If you use AI and just ask the question, 'What can I automate tomorrow?', you will gain a little bit of value out of it. But this is not the real disruption. No, the real disruption is combining it with the idea of collaboration, inventing collaboration objects which know each other [and] have the data set, which they can call, discuss, and create things with each other, because they're intelligent agents as bots, talking to each other, so to speak.

This is just one example out of many innovative approaches that are currently under development, but which will not come to market in the immediate future. Enterprises therefore need to choose a careful path in their adoption of AI. On the one hand, the pace of change is terrifying. As I recently noted when considering the challenges Microsoft face in bringing AI to its customers at a time when the technology is doubling in performance every six months:

People tend to underestimate the compounding effect of scaling at that pace. Fail to act and your competitor will have a 10x advantage in less than two years, a 50x advantage in less than three, and a 100x advantage six months later.

But if you make the wrong choice, you'll not only fall behind just as much, but lose your investment too. Timing is everything.
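The arithmetic in that quote checks out: if performance doubles every six months, the gap after t months grows as 2^(t/6). A quick check in code:

```python
# Quick check of the compounding claim: performance doubles every 6 months,
# so the advantage after t months is 2 ** (t / 6).
import math

for target in (10, 50, 100):
    months = 6 * math.log2(target)
    print(f"{target}x advantage after ~{months:.1f} months")
# -> ~19.9 months (<2 years), ~33.9 months (<3 years), ~39.9 months
```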
There's certainly some low-hanging fruit that enterprises should seize immediately -- overdue automations that now become affordable or feasible thanks to the new capabilities that AI brings, and which deliver significant business value. But a headlong rush to trigger an avalanche of automation risks burying the organization in a mass of contradictory, wasteful processes that defeat the ultimate object of cutting costs or increasing output.

One of the hallmarks of the long-term impact of any new technology is the advent of standardized approaches -- consider the emergence of automated configuration scripts and containerization as cloud computing grew to industrial scale, or how enterprise SaaS coalesced around configurable business processes that came ready-to-use. When considering how enterprises should harness AI, my expectation is that the full impact will only come after the adoption of a new wave of standardization in how long-established enterprise processes are carried out -- standardization that, like Coupa's proposed supplier collaboration platform, cuts across enterprise boundaries to establish common data models that enable more powerful automations. It's time to start keeping an eye out for these nascent standards as they begin to emerge, but in the meantime maintain careful oversight of AI-powered automation within every organization, to ensure that it isn't creating wasted duplication of effort instead of meaningful business outcomes.
[4]
Enterprise hits and misses - execs like gen AI more than workers - but why? And: retailers put tech to the Cyber Monday omni-test
Lead story - Can Gen AI make a useful dent in the unstructured data problem? And why are execs and workers divided on AI?

Generative AI enthusiasm is hardly universal. Some colleagues swear by productivity increases, while others are indifferent. But it's even more interesting when patterns of gen AI opinions emerge. Cath addresses this in C-Suite enthusiasm for generative AI is often about survival, not long term value:

It has been clear for some time that many employees are not quite as enamoured of the AI phenomenon as their senior leaders. For instance, while one in 10 US workers may use generative AI technology, such as ChatGPT, at least once a week, seven out of 10 never use it at all, according to a recent Gallup report. The figures have changed little over the last year.

I suspect usage levels will rise as more so-called "co-pilots" are adopted, whether employees initially love the tools or get them force-fed (ugh). But it's no surprise that results would be mixed on the ground, as tools are put to the enterprise test. Some junior-level employees may find valuable corporate knowledge in these tools. Some employees write fast and easily on their own; some don't - and will gladly have a writing assistant (same with coding). Some haven't had the chance to experiment with enterprise-approved tools that can, for example, summarize complex documents, or answer self-service questions. But even more interesting to me is the gap Cath noted between board and C-level enthusiasm:

A study by Deloitte revealed that only 14% of board members talk about the technology at every meeting. Some 2.5% discuss it twice a year, while for 16% it is an annual topic of conversation. A further 45% indicate it has never entered the order of business at all.

So, what is going on here? Why the apparent disconnect between senior executives and the rest of the organization? One potential answer that makes sense: it takes time to properly integrate such tools into workflows in specialized ways. Meanwhile, getting role-based value out of more generic/off-the-shelf gen AI bots presents obstacles - even if you are somehow doing it securely. Cath quotes Emily Rose McRae, a Senior Director and Analyst in Gartner's Future of Work and Workforce Transformation practice:

The disconnect I see is between leadership and average rank-and-file employees and managers, who are asked to deliver on expectations. Usage needs to be embedded in workflow, or people need to have a clear use case. If they don't have one, they need time to figure out what it looks like. But if they don't have much free time in their schedule, it's harder to get productive with more generic tools [such as ChatGPT].

Another vital point: the C-level execs excited about AI are not necessarily thinking of it in terms of day-to-day employee usage. Cath also quotes Ricardo Madan, Senior Vice President of IT services provider TEKsystems Global Services:

The C-Suite is less worried about individual non-tech savvy users and more concerned that their software, hardware etc. is AI-enabled. What's top of mind is how it can help them save money, be more efficient, get products to market faster, and increase customer satisfaction. Whether the rest of the workforce wants to use it day-to-day, that's a next year or 2026 thing.

I can't quarrel with that answer - except to say that there is still a whiff of focusing more on sexy (and not cheap) technology rather than the business problem/goal at hand.
That said, I am all for anyone inside an organization who bears down on the pros and cons, rather than watching the unbearable "AGI" hype coming out of Big AI companies determined to pump up valuations, even while doubts grow about the "scale is all you need" bandwagon (for more on that, read on).

Speaking of gen AI pros and cons, I've had some surprises this fall. I've seen approaches that improve explainability beyond what I was expecting, albeit via RAG/context window documents, not the LLMs themselves (watch for my piece on that tomorrow). I've also had some light bulbs go off via the potential of gen AI to reckon with the long-time bugaboo of enterprise workflows: unstructured data. The fusing of structured and unstructured data into new workflows is early days, but it's a good enterprise AI story to watch. For a sense of what I'm talking about, check George's latest, The new world of unstructured data and AI - how to build an unstructured data pipeline for gen AI. Worth noting about George's piece: data problems aren't magically solved, and new approaches to unstructured data may still require things such as GraphRAG, knowledge graphs, unstructured data platforms, pre-processing, and metadata tagging.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

A few more vendor picks, without the quotables:

Jon's grab bag - Chris bears down on the wonderfully uplifting topic of the future of news in the AI era: AI and the media - the future of news could be a 'desert', warns UK House of Lords. (I guess if AI runs out of news outlets to train on, it can always train on "synthetic" news data once our actual publications of record are extinct - it can't be much less reality-based than the unhinged Reddit threads already ingested.) Mark Samuels has a more upbeat take on AI and the UK economy: UK Minister for AI explains how her Government wants to exploit emerging technology. Finally, Chris explores a disconcerting question in Scaling up AI start-ups - why is there a UK investment gap?

Looks like Microsoft wasn't doing something shady with user training data, but it's hardly a surprise some users would leap to conclusions. Something tells me this won't be the last crypto-related whiff of the year. Put this one in the "live life while you can, because the good ol' days when you could ride your hoss around aren't comin' back" bucket.
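One technical aside on that unstructured data discussion: to make "pre-processing and metadata tagging" less abstract, here is a minimal sketch of how an unstructured data pipeline might tag incoming documents. The field names and rules are my own illustrative assumptions, not George's pipeline:

```python
# Sketch: the pre-processing + metadata tagging step of an unstructured
# data pipeline. Illustrative only - field names and rules are assumptions.
import re
from dataclasses import dataclass, field

@dataclass
class TaggedDocument:
    source: str
    text: str
    tags: dict = field(default_factory=dict)

def preprocess(source: str, raw: str) -> TaggedDocument:
    text = re.sub(r"\s+", " ", raw).strip()   # normalize whitespace
    tags = {
        "length_words": len(text.split()),
        "mentions_invoice": bool(re.search(r"\binvoice\b", text, re.I)),
        "doc_type": "email" if source.endswith(".eml") else "document",
    }
    # Downstream steps (chunking, embedding, GraphRAG/knowledge-graph
    # extraction) can filter and route on these tags instead of raw text.
    return TaggedDocument(source=source, text=text, tags=tags)

doc = preprocess("q3_update.eml", "Please\n find the  attached invoice...\n")
print(doc.tags)
```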
A comprehensive look at the current state of AI adoption in enterprises, highlighting the disconnect between executive enthusiasm and employee skepticism, challenges in implementation, and potential impacts on automation and data management.
The adoption of generative AI in enterprises is marked by a notable disconnect between executive enthusiasm and employee skepticism. While C-suite executives and board members are increasingly discussing AI implementation, many employees remain hesitant or indifferent [1]. A Gallup report reveals that only 10% of US workers use generative AI technology such as ChatGPT weekly, while 70% never use it at all [1].
This disparity in attitudes can be attributed to several factors: it takes time to integrate such tools into workflows in specialized ways; rank-and-file employees often lack a clear use case, or the free time to develop one; and executives are frequently less focused on day-to-day employee usage than on whether their software and hardware are AI-enabled at all.
The latest advances in AI have brought a surge in enterprise automation capabilities. Generative AI has made it significantly easier to create task-performing agents, reducing the need for manual coding and enabling even non-developers to build automations [3]. This democratization of automation tools presents both opportunities and challenges: it promises to clear a longstanding backlog of overdue automations, but it also risks automating processes that were already suboptimal, and piling up 'process debt' as near-duplicate, inefficient automations proliferate across the enterprise [3].
One promising application of AI in enterprises is addressing the challenge of unstructured data management. Generative AI shows potential in fusing structured and unstructured data into new workflows, although this area is still in its early stages [4]. Key considerations include the continued need for pre-processing and metadata tagging, and for supporting technologies such as GraphRAG, knowledge graphs, and unstructured data platforms [4].
SAP, a major player in enterprise software, is taking steps to make AI more accessible and practical for its customers: its Generative AI Hub offers a choice of more than 25 models, an SDK adds JavaScript support (with ABAP AI to follow), specialized 'expert' AI agents target focused tasks, and an AI ethics assessment framework guides use case selection.
As enterprises navigate the AI landscape, several challenges and considerations emerge: missing feedback mechanisms for reporting aberrant AI results, opaque pricing and unknown operating and sustainability costs, abuse of citizen-AI tools by bad actors, and thin documentation of AI-enhanced processes.
In conclusion, while AI presents significant opportunities for enterprise automation and data management, its successful implementation requires careful consideration of integration challenges, employee adoption, and responsible use practices. As the technology evolves, enterprises must balance enthusiasm with practical implementation strategies to realize the full potential of AI in their operations.