Curated by THEOUTPOST
On Sat, 21 Dec, 12:01 AM UTC
7 Sources
[1]
The Enterprise Tech Year - Policy
AI might accelerate digital transformation, solve the climate crisis and lead to an era of peace and prosperity. Except it may also divide, unemploy, pilfer, sicken, fine and maybe even extinct us, or at least our humanity. Some of the top policy concerns have been the erosion of trusted media, the future of jobs, mental health impacts, data scraping, and healthcare. AI vendors are even distracting us with the thought that AI might just kill us all like in the Terminator, so we should just batten down the open source AI hatches to protect civilization from rogue AI, not to mention from the costs and revenue losses of these more practical concerns. The erosion of humanity by a thousand small cuts might be just as bad, and far more likely, than an army of killer robots. And we need better standards and regulations to accelerate the rollout of more trustworthy AI. Maybe it's time to start thinking about ethical debt by weaving AI policies into the AI development process. Also, you might want to throw out all those dystopian novels. Governments rarely have the resources or skills to prevent the worst excesses of a free-market economy, and it is always nigh on impossible to put a genie back in its bottle. So, that leaves a good chunk of the responsibility with the tech industry to ensure it does not end up living out its days like Oppenheimer, campaigning against the destructive force it has created. But it is going to take more than just an open letter appealing for collective action in creating responsible AI. It is going to require concrete action - and now. Why: Scant attention has focused on the ways big tech companies are exploiting AI innovations to push for lighter regulations for themselves and stifle competition in the name of national security and AI supremacy. This has troubling implications for privacy and for safeguarding the creative output of artists and frontline workers from abuse and unfair competition at scale. One particularly alarming narrative being promoted in the West and amplified by AI is that we are in an AI arms race and that failing to lower privacy, copyright, and trust barriers will give adversaries a leg up. Also, that these innovations need to be locked up and controlled by a small cadre of "benevolent" dictators who know far better how to protect them than regulators, security experts, and the individuals already suffering from the effects of bad policy. Adopt AI strategically to make your human workers smarter, and the economy wins. But use AI tactically to slash costs and replace humans with machines, and the net result would be zero gain for millions of jobs lost. To any veteran tech industry observer, this is both the critical issue and the point at which it is hard to feel optimistic. This century, survey after survey of incoming technologies that promise to augment human skills has typically found that business leaders' priorities are to slash costs and do more with less, rather than make their organizations smarter. Why: Elaborating on one particular flavor of the newspeak mentioned above, Chris Middleton explores the narrative of more jobs and greater efficiency currently being pushed. If history is any indicator, greater efficiency generally means cutting jobs or taking the humanity out of the jobs that exist or are created in the process. He urges us to consider the proportion of people working in call centers, often in former industrial heartlands where other work is scarce.
Such jobs are typically insecure, stressful, repetitive, exploitative, target-driven, and have high churn rates. The silver lining, at least for workers, is that most organizations lack the requisite skills, not just in technical terms, but also in areas with compliance implications, such as security, privacy, copyright, and data protection. For both individuals and organizations, skills like the ability to think critically, move sideways, and communicate well will be at a premium. In seeking to enhance productivity, over-reliance on AI may actually erode it over time. Lonely, disengaged employees aren't likely to bring their best selves to work. They're less likely to collaborate, innovate, or go the extra mile for their organizations. Why: Things look grim for those working the new jobs AI will supposedly create. Research finds that AI implementations that focus too much on productivity and not enough on employee well-being run the risk of accelerating worker disengagement and poor mental health trends. Employees who used AI to get the most done felt socially deprived and were more likely to resort to alcohol and suffer from insomnia. Troublingly, these findings seem at odds with the potential benefits of freeing up more time for meaningful connections with other humans. There is a realistic possibility of the UK's news environment fracturing irreparably along social, regional, and economic lines within the next 5-10 years. The implications for our society and democracy would be grim. A growing proportion of the population risks becoming increasingly poorly served as the economics of mass market journalism worsen, unreliable online sources proliferate, and 'anchor' institutions like the BBC struggle to ensure their reporting takes account of and reflects the underlying causes of socio-political realignments. Why: A troubling byproduct of human evolution is that bad news tends to be remembered and to travel faster than good. This was important when a threat like a predator might mean the end of things, while missing out on the ripe apples was OK since there were more of them. But today, social media sites, fly-by-night news organizations, and politicians are gaming this human survival mechanism to generate revenues while sowing dissent and hatred in the process. Advertising-funded platforms that have powerful AI portfolios all stand to benefit (at least financially) from watching the edifice of mainstream media crumble. The UK House of Lords is concerned that the period of having "informed citizens with a shared understanding of facts" is not inevitable and may not endure. Although this is a grim prospect, at least the human agitators may keep us distracted. We are just product, it seems. Not even that, but lab rats in a live experiment with our careers and livelihoods. And this from a networking site for human professionals! I would describe that as a meta irony, but that's a different AI company... Of course, some argue that if you are really skilled or talented, then AI can't replace you. Perhaps, but in many cases the opposite is true: the more high-profile you are, the more likely it is that your data has been scraped to train a large language or generative model. The result? Work that seems a lot like yours can be generated at will.
Why: The humiliating current reality is that policymakers are twiddling their thumbs as AI vendors hoover up our styles, thought processes, plots, coding logic, and self-help tips to train the army of bots that will replace us and displace our humanity. After all, none of these things are considered infringement under current copyright and intellectual property frameworks. Multi-trillion dollar companies are suggesting that we are just data points, so our feelings and professional pride are irrelevant. As long as somebody's safeguarding these aspects of this technology, we should get an improvement in quality and safety across the board. We just need to understand each other's needs by working with each other. Good ideas are good ideas, but what matters is making sure that everybody is on the same page. Striving for greater alignment will help us all, including our remit for patient safety. Why: Speaking of divisive topics fueling social media dissent, healthcare now seems to be making the rounds along with immigration and DEI, fueled in part by a certain very disturbing incident that recently occurred in the US. In the UK, it's about increasingly long wait times, while US citizens are wondering why the US leads in healthcare expenses but lags in longevity. So why can't AI speed up diagnoses, help researchers find new drugs, and thus be a major force in curing diseases while moving medicine towards more predictive and preventative care? Middleton explores some of the political and regulatory dimensions. The short answer is that proper regulation and standards aid interoperability by establishing common ground and keeping the public relatively safe from corporate overreach and intrusion. But watch out for "guidance rather than legislation" approaches that give unscrupulous vendors a massive grey area in which to act solely on their own behalf. It is easy, in a cynical world, to dismiss the doom merchants. And to observe that fear of the future is a narrative that has passed from generation to generation. As I said above, it is as old as storytelling itself. But I have a suggestion: add one word to the question 'Will AI destroy humanity?' And that word is 'our'. Will it destroy our humanity? As I have written in these pages before, we have been told throughout this century that AI will automate all the boring jobs and free us to be creative. But right now, it is doing the exact opposite. Why: A recurring theme being pushed by large AI vendors is that the rush to build more competent AI will wake up a superintelligence that will put us out of our human misery. It will know we are miserable because of the divisive social media and news stories, mentioned above, that it is trained on. But don't worry, the super smart AI companies, which happen to be neutering their trust and safety teams, will protect us from this big bad AI. They just need a little regulatory help protecting us from the dangers of open source AI that would let the cat out of the bag sooner. Middleton's reframing above is an invitation to focus on the more likely trust and safety issues that are relevant today rather than the dystopian science fiction narratives that may or may not occur in some distant future. Simply knowing that you can stop a fast car when required means you can slow down when you hit a slippery spot or a bend in the road. Today, all the big AI companies seem to be lobbying against various AI regulations, yet they seem to lack the mechanisms to slow down when things go wrong for multiple stakeholders.
Maybe that's OK for Silicon Valley executives, but it might not be the best approach for companies planning for the long run or execs hoping to leave behind a legacy valued by society. Why: Speaking of dystopian narratives, Dr Rumman Chowdhury, CEO & Co-founder of Humane Intelligence, recommends we make a little more room on our bookshelves for a few more Utopian science fiction novels exploring ways AI could create a better future. Also, we need to consider a socio-technical approach that evaluates AI in the real world rather than assessing safety in a lab. Right-to-repair approaches could also help fix AI apps in the field. "The rapid innovation in AI technology harbors the risk of accumulating ethical debt, a phenomenon with an array of reasons ranging from a focused race to market dominance to unintended biases and consequences, a lack of accountability, and the undermining of societal norms and structures." Why: diginomica contributor Neil Raden sadly passed away on May 18, 2024. He was an outspoken advocate for a different kind of AI ethics, arguing that, with a few exceptions, most AI policies did not reckon with the structural issues he elaborated on in We're stuck in the AI ethics fishbowl - so how do we get out? Forget about pretending to make meaningful change at Davos. Practical AI policy needs to be integrated into the development process, he wrote in Yes, ethical debt is a problem for AI software development. Compromises in ethical considerations during the development of AI technologies will cost far more to fix later. It took two decades for people to generally agree that the Horizon system may have hallucinated the books, and the victims are still waiting for compensation. If we expect to trust these systems at scale, creating processes to report, investigate, and act on hallucinations must be as simple as filling out a form. Why: Just this week, the UK Post Office Horizon inquiry is coming to an end, after a scandal that inflicted untold misery on thousands of honest postmasters over two decades. This may be the crest of a tsunami of similar incidents of algorithmic decision-making gone rogue, particularly when backed by vested interests and accelerated by new AI technologies. In this personal account, I explore this phenomenon in one relatively low-stakes case of a hallucinating parking enforcement process. AI regulatory proposals tend to focus on high-risk situations. But scaling low-risk hallucinations without easy feedback and control is troubling as well. Unless we treat this as a systemic problem, we will only continue to be assaulted by the adverse impacts of such hallucinations in fines, insurance claims, healthcare, government services, and other domains, with no easy process to make things better.
[2]
The Enterprise Tech Year - Frictionless Enterprise
Just when you thought digital transformation was already getting complicated enough, along came a year of bewildering AI innovation that is forcing a complete rethink of the enterprise roadmap. The core tenets of Frictionless Enterprise, diginomica's framework for digital work and business, still apply, but the landscape in which they operate is much changed. If you're reading an article about modern AI that's more than four months old, you're wasting your time. Why? Wow, it's been quite a year. Based on what Sayan Chakraborty, co-President and tech leader at Workday, told me in a fascinating conversation a couple weeks ago, perhaps it's not even worth looking back over the whole twelve months. But institutions and enterprises take longer to adapt to change, and the world has to keep moving, even if it can't maintain the same pace as AI innovation. So let's press on. Much of the investment in AI in the next few years will be wasted. But just like John Wanamaker's spend on advertising, the difficulty for enterprises will be knowing which investments will turn out to have been in vain. Why? Based on what we've seen with earlier waves of technology innovation such as the web and cloud, people make a lot of missteps in the first few years. I see a particular danger from rushing to automate redundant processes. The roll-out of AI does need to happen at pace, but it also needs careful governance, a willingness to adopt new standardized processes, and thoughtful change management. It seems that, one way or another, a graph layer is becoming an essential ingredient in helping AI make sense of enterprise data. Why? This was my assessment back in March, after a year already of listening to enterprise vendors talking about their AI stacks. That's an age ago in AI terms, but I think it's still relevant as enterprises prepare the ground to make the most of their AI investments. At the time, vendors were talking up LLMs, prompt engineering and trust, but it seemed to me that it was equally important to look at how the data was brought together and contextualized before being presented to LLMs -- and how the results were audited. Without a graph to map the key objects and how they relate to each other, there's very little hope of making sense of all the unstructured data being fed into generative AI -- such as the world's estimated three trillion PDF documents. It's where I suspect enterprises will find many AI investments wasted, which is why, more recently, I roasted Microsoft for failing to put its graph at the core of its AI strategy. Benioff's argument is that most of the industry is bringing exactly the same mindset to AI -- a throwback to the old on-premise days, when customers wasted huge amounts of money trying to figure out how to get functional results from products whose vendors took no responsibility for actually operating what they had built. Why? When Salesforce CEO Marc Benioff popped up on a video link at Workday's conference to promote his vision of AI agents, it suddenly struck me that AI is developing in exactly the same way that cloud computing did. Most enterprises are taking what used to be called an on-premise or private cloud approach, trying to build their own AI -- and most vendors are happy to sell them the tools. 
But realistically, enterprises will be much better off trusting vendors to build and operate agent platforms for them -- what Benioff is now ambitiously calling a digital labor platform -- so long as they take a trust-but-verify approach and satisfy themselves that the vendor's platform is robust enough. Optimizing for agility and realizing that the promise of digital transformation is just something that will forever become this eternal state of change, I think is really important. Why? Chris Bach, co-founder of Netlify, articulates a key principle of composable architectures, and one that underpins the change-ready nature of Frictionless Enterprise -- or real-time business, as I discovered a team at MIT have independently named this joined-up way of doing business enabled by connected digital technology. Unilever provided an unexpected case study in creating a MACH-compliant, change-ready app for its distributive trades ecosystem, while Coupa gave a glimpse of future potential in its plan to break down sourcing and procurement silos to create a supplier collaboration ecosystem operated by autonomous AI agents. We've achieved that KPI. We've achieved that stock turn, and that's released six million of working capital. That's great. But actually, because we've got so much data now, industry best practice is that it should be at this level, and therefore you've got another four million to go at. So now let's put a success plan together that says, how do we get those extra four million? Why? Cathie Hall, Chief Customer Officer at IFS, gives an example of what I regard as the ideal measure of customer success -- are we helping our customer achieve the best possible business outcomes? Whether you use business value engineering, as IFS does, or a more playbook-based approach such as the one recently launched by Certinia, staying in close touch with the results your customers are experiencing is a key part of Frictionless Enterprise. I would argue, in a digital world, organizations that are using the right AI are actually more human than the organizations that are not using AI. Why? A provocative statement from Brad Anderson, VP Product at Qualtrics, but one that reflects the aspirations of many enterprises looking to infuse AI into a more empathetic and productive customer experience. As Zendesk customer Charlotte Tilbury puts it, "We see AI as an actor within our flow" -- and as some of Intercom's customers are finding, AI often gives more accurate answers than human agents. But the quest for what the CX industry calls personalization at scale means getting on top of your data and processes, as Adobe customer Lenovo found. [Meta] don't want their customers -- and their customers are very afraid of -- going through another turmoil like this. Why? Gideon Pridor, CMO at Workvivo, the nominated beneficiary of Meta's surprise decision to shutter its enterprise collaboration platform, sums up the way many tech buyers must be feeling about the enterprise teamwork landscape at the moment. It's becoming more and more challenging to stitch together an effective Collaborative Canvas for digital teamwork, another core component of Frictionless Enterprise. Unlike Meta, most vendors in the space are expanding the scope of their tools -- a few examples over the past year include digital whiteboard vendor Miro adding composable workflows, content management platform Box introducing a no-code app builder, and messaging platform Slack integrating AI agents. 
But this doesn't make it any easier for customers to plot a roadmap for their users. Katie Sissons, Senior Programme Manager at Asana customer Beauty Pie, sighs: "There's lots of tools that are becoming hybrid tools, diversifying into different areas of what a tool can do, and that's quite confusing for people." Now your team is able to go forth and achieve things that they were never able to in the past, because the bandwidth has been now freed up by domain-specific agents and AI-powered agents that are able to do some parts of your job a lot easier. Why? These words from Anu Bharadwaj, President of Atlassian, set the scene for a year in which teamwork vendors, including Atlassian, have mostly done a good job of integrating AI capabilities into their offerings. Just as well, because enterprises are still feeling their way towards how best to manage distributed work. I'll give you my thesis on this. I believe economies of scale can be obtained at smaller scales than people think of today... The level of consolidation today is much higher than what is required by considerations of economic efficiency. Why? How will AI change the world? We don't know for sure, but what we can say with some certainty is that technology innovations on a similar scale have historically led to huge social and institutional resets. That's why I was intrigued by these words from Sridhar Vembu, Zoho's visionary CEO, about the potential for enabling a more distributed, localized model of industrial production than the highly concentrated, mass-market structures of the industrial era.
[3]
The Enterprise Tech Year - AI in action
When I wrote last year's AI use case roundup, I was fully expecting we'd be documenting gen AI project results in 2024. I thought we'd separate the AI wins from the failures. There was a bit of that - but far from what I expected. By the time I wrote my mid-year AI project assessment, my tone had shifted: 'How do we achieve trust in a deepfake world? Trust is now at a new premium - in AI and beyond.' That's not to say the AI conversation didn't advance. It moved so fast, I felt compelled to take a concerted shot at agentic overhype in Why are AI agents being hyped to the point of absurdity? Despite my agentic critique, there is plenty of upside for AI agents. On a use case level, however, that will have to wait until 2025. So did we learn nothing in 2024? I think we learned quite a bit. Taking generative AI live at scale isn't easy for most companies, but on the positive side, the lessons for getting AI projects right are piling up. When AI vendors can speak directly to perceptions of risk - and earn AI trust through data transparency - there are usually good projects to be had on the other side. Responsible innovation is really at the core of this offering and of our overall approach to AI. As a former bank holding company, we are held to extremely high standards of oversight, so we are ensuring our legal, privacy, compliance, procurement, and cybersecurity experts are actively engaged in learning how AI changes the landscape for their specialty and are empowered to advise the organization as we move forward in a world where AI is prominent. Why? This quote from Amex GPT is an apt summary of the mood of most enterprises heading into 2025: yes to innovation - but within enterprise constraints. I was also struck by George Lawton's note on use case selection and results: "business drivers such as traveler care, finance, engineering, and the modern workplace to ensure it leads with the desired business outcomes." AI might venture to bolder places over time, but these priorities are where enterprise AI starts. Vendors that push "game changing" or "revolutionary" AI tech but can't speak to risk, industry and customer results will struggle. The use case here is definitely around being able to analyze big volumes of documents quickly and with the minimum of human effort involved - but being confident the system's always going to give you accurate results. Why? Enterprise search is clearly emerging as a top early gen AI use case - which includes, perhaps, a good amount of the customer support use cases, equipping support reps with better/faster documentation. Accuracy rates at the 98 percent level indicate the impact of enterprise AI architectures, utilizing fine-tuned LLMs, RAG, prompt engineering etc. Though a 98 percent accuracy rate isn't enough for certain higher risk use cases, it's more than good enough for many others. Last year, Jessica was connected with in over 2 million sessions. It is also currently offering a 75% containment rate. That means queries that otherwise would have meant a call to the company's contact center, about orders or commissions, don't need to be made - saving significant Belcorp corporate support time and cost. Why? AI's impact is across industries: next stop, retail. In this case, "Jessica," Belcorp's WhatsApp chatbot, is already showing strong results. 
We can debate how 'intelligent' these current chatbots are, but they are a big step up from their predecessors: "Spanish in Chile is different from Spanish in Peru and Colombia. Jessica can talk in the specific local flavor of the language the consultants use." I want to give my colleagues superpowers. So if they can automate stuff, and their little AI friend can do things for them... If AI can crunch a lot of data, so instead of being in Excel all day, they can just get the insights, and use them for their creativity. So that's the framework. Why? This standout quote from Aldo's CIO describes the kind of AI I want to hear about: the kind that supercharges employees, rather than scaring them into burnout to avoid headcount reduction. Also: let's hear more about freeing up human creativity/ingenuity in the workplace - something that is not guaranteed with AI (our own Chris Middleton has scorched these pages with critiques of AI supplanting human creativity and creative work, to our collective detriment). Lloyds Banking Group is working with ServiceNow, Workday and Microsoft in its HR function to help enable this transformation. Martin said that by focusing on the larger technology players, more often than not they come with a similar set of values and with guardrails specific to AI and generative AI built into their platforms. Why? Yes, some enterprises with sophisticated IT operations are tuning their own LLMs, but the enterprise trend is to consume generative AI via established/trusted vendors. I expect that to evolve as companies get a higher comfort level, but even LLM providers like OpenAI or Cohere will be expected to provide enterprise-grade services (and support) - at least through services partners. The addition of AI modelling has enabled BNP Paribas Cardif to model its claims and discover which types of claims require less documentation, making the claim process less stressful for the customer and faster for the business. AI is not the only way the insurer has reduced its reliance on documentation. Why? Note: AI is one aspect of the tech, which includes optical recognition, natural language processing, information extraction, data matching, and a data quality assessment. We also get a clear sense of how AI has made the claims process better. Aschberger's views hit on a key issue: there is such a thing as AI readiness. It's not just a matter of vendors maturing their AI solutions; customers have work to do also. For Aschberger, this means apps platforms, rather than jumping from app to app. Why? The Hästens vision of AI in the service of better customer experiences resonates. As does choosing (and consolidating on) an apps platform, in this case Salesforce. Yes, APIs can cross platforms - but standardization pays an AI dividend. In Hästens' case, they have a solution, built on Salesforce, where a new partner can operate their entire store just on their iPhone. That should provide fertile ground for AI. We know the technology works, but we are essentially running what we call a 'clinical utilities' study to prove it delivers all the benefits that we have seen in our very controlled trials in a real-world environment, with all the vagaries of real-world clinics and patients. We're looking closely at performance, the outcomes for the patients, and potentially all the cost savings to the NHS. Why? Healthcare is one of the most potent areas for AI due to the massive data sets involved - and the potential for human impact. But there are an equal number of pitfalls.
This quote from the NHS sums it up: AI pilots can be necessary, but there are dangers too - AI can behave differently in 'real-world environments.' AI tech doesn't come cheap, especially when it's in generative mode. Therefore, outcomes versus cost is perhaps the crucial AI metric. In other words: some good AI results may not pass the profitability test. In my view, we are moving into a generative AI results phase, but we're not yet at the ROI phase, because costs (and AI pricing) are moving targets. We now have this data foundation. So, for any application or new use case we want to launch, the time to realize the business benefits has shrunk... Why? There is such a thing as "AI readiness." Customers that put effort into data quality/platforms/governance have an edge, not just for gen AI, but for predictive AI, analytics, and flat-out making sense of their business. Sanchez acknowledged that AI/automation projects can stoke internal fears of job loss. DataStax has an instructive way to handle this: address the future of work head-on, and give employees a flavor of the new roles they could take on. Some companies are doing a poor job of communicating what AI means to them. DataStax is going for a different tone. Why? A big part of successful AI is how you talk about AI internally; DataStax was open about how they do it. With growing confidence in automation projects, rotating finance team members to sample new roles sets the right tone: there's a place here for you in these changes; let's find the right fit. Now if we can just balance the "getting started" stories with the edgier ones. As I wrote last spring: "So many AI stories this spring were about productivity pilots. But what about creative app building? What about utilizing big data sources in imaginative new ways?" That's a question we'll be looking to answer in 2025.
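As an aside, here is a minimal, hypothetical Python sketch of the enterprise search pattern mentioned earlier in this section - retrieval feeding a carefully engineered prompt to an LLM. The documents and the naive word-overlap retriever are invented stand-ins; a production system would use embeddings, a vector store, a fine-tuned or hosted model, and the kind of accuracy evaluation discussed above.

# Illustrative sketch only, not any vendor's implementation.
DOCUMENTS = {
    "policy-014": "Claims above 10,000 EUR require a second reviewer and original receipts.",
    "policy-022": "Travel bookings must be made through the approved corporate portal.",
    "faq-003": "Password resets are self-service via the identity portal.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query (a stand-in for vector search)."""
    query_words = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved passages and ask it to cite them."""
    passages = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (
        "Answer using only the passages below and cite the document ids.\n"
        f"{passages}\n\nQuestion: {query}"
    )

# The assembled prompt would then be sent to whichever LLM the organization has chosen.
print(build_prompt("Who needs to review a large claim?"))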
[4]
The Enterprise Tech Year - Partner thought leadership
There are no prizes for guessing that artificial intelligence ran as an undertone through much of diginomica's partner content this year - from addressing the hype, to the impact on skills, to what it means for legacy tech. But the bigger picture is a little more nuanced than that. Some of the most popular pieces of thought leadership this year cut to the heart of not just AI, but two of the most critical issues for ERP vendors: data, and change management. Are you sitting comfortably? Here's a round-up of the best.
Acumatica: Manufacturing hits a turning point - can a different breed of ERP system bring manufacturers closer to their customers?
In order to maintain relationships with customers, manufacturers are adapting by increasing their online presence with e-commerce, self-service capabilities, and new business management software. Why the change? Partly the need to meet customers where they are - and significant disruptions in the supply chain. Debbie Baldwin discusses the challenges that come with making these changes - and shares two use case examples of manufacturers who made the change to a modern ERP.
ASUG: Are ERP customers ready for extensibility platforms? ASUG's SAP BTP research underscores an emerging trend
Marissa Gilbert shines a spotlight on how SAP customers feel about exploring solutions that accommodate new features and build in new functionalities within the Business Technology Platform (BTP). Changes such as these used to be heavily dependent on custom code, which brings drawbacks of its own for maintenance and future development. Through the results of a major research study, ASUG takes a deep dive into how well customers understand the elements of SAP's BTP offering. What capabilities are important to customers? What percentage are involving partners and consultants to support their BTP implementation? Research findings also include not just the benefits - but the expectation gaps. This is a thorough and objective presentation of why implementation strategy for making incremental change is crucial to getting it right.
Atlassian: Taking the 'meh-ness' out of AI for developers
I couldn't put the introduction to this article any better than Andrew Boyagi does: Asking developer teams if AI is helping them be more productive usually results in a response of "meh, it's OK." What?!? AI represents the biggest technological advancement since the internet, and it's meh? According to Atlassian's State of Developer Experience report, two-thirds of engineers report they haven't yet experienced productivity gains from AI. By that statistic, "meh" is possibly an understatement. There's more to it though, and Boyagi expands on the way that AI is being used. For experienced developers, using AI for coding offers minimal benefits - but using it as a strategic collaborator can make a bigger difference. Of course, this requires a change mindset. Three articles into this review and the pattern is already emerging.
Celonis: Crawl, walk, run - a look ahead down the path to Process Mining breakthroughs
Technological advances, like process mining, often progress gradually over years before reaching breakthrough moments, writes Prof Wil van der Aalst. This can be challenging when organizations and leaders are under pressure to see returns on investment. However, technological advances like process mining, when developed at the right pace, are leading to exciting and transformative developments such as federated process mining, enabling secure cross-organizational collaboration.
Process mining using domain-specific applications is paving the way for knowledge-driven, core business optimizations in industries like healthcare, manufacturing, and transportation. Prof van der Aalst expands on these opportunities in detail, with some real-world examples of the latest developments, as well as innovations in the works.
Certinia: What keeps customer success leaders up at night - and how they can get some sleep
Customer success isn't just a buzzword - especially for service businesses who are making strides in raising customer satisfaction and retention. But there's no straight answer to what customer success looks like. Kate Alger explores three important questions that customer success leaders should be asking themselves. Data, automation, and digitization are three keywords that form part of the answer - Alger breaks these down into more detail with the reasons why.
Confluent: AI-driven change and data management simplicity - key technology trends for 2024
Richard Timperlake takes a close look at what Confluent is seeing from customer behavior in order to map some trends for the future. This article is less about predictions and more focused on joining the dots based on the goals of companies who want to simplify their data management - and which industries are prioritizing data streaming. Some trends will continue into the next year, but have certainly come to the fore during the last 12 months, including this one: Although there's an uptick in the number of industries interested in data streaming -- including the manufacturing, telcos and government -- financial services continue to lead the way. In the world of payments, for example, we expect to see growth amid pressure to provide robust and secure systems that meet ever-tighter regulations. We have one customer currently processing hundreds of billions of dollars' worth of payments every year in an environment that is secure, resilient and well governed through Confluent, and we'll see examples such as these increase further in 2024.
IFS: The rise and fall of AI hype - understanding the current landscape of industrial AI
Has AI hype begun to calm down? Christian Pedersen looks to IFS research that suggests that while there's still plenty of enthusiasm among business leaders for AI implementation, this is being countered by the reality that a decent chunk of gen AI projects fail to get past the pilot stage. An examination of industrial AI provides several useful points of emphasis: the importance of internal data, technical expertise, and expectation management. Pedersen discusses the approach taken by organizations such as Rolls-Royce who are on the path to success, and provides some key strategic directions for companies to adopt.
Oracle: Getting started with AI (and generative AI) in finance
The implementation of AI is more than just a tech upgrade, argues Keith Causey. His piece explores how AI and generative AI are transforming the finance landscape, empowering CFOs to streamline processes, make data-driven decisions, and drive innovation. Causey takes a tactical approach, stressing the importance of adopting a data-first mindset, empowering employees by investing in training, and outlining some of the areas that are prime opportunities for automation and simplification - while having a strategic plan to reshape finance operations.
Planful: CMOs - want to reach peak financial performance all year round? Then partner with Finance!
As Planful's CMO, Rowan Tonkin is well placed to know what he's talking about in this article. We've seen since the pandemic that marketing budgets became the line item that could get cut back to the bone. Tonkin speaks directly to marketers here by emphasizing the importance of framing discussions and decisions in terms of measurable financial returns or outcomes. This involves translating metrics like click-through rates (CTRs) or marketing-qualified leads (MQLs) into tangible business impacts, such as increased revenue, reduced costs, or improved profit margins. It's about showing how investments in marketing directly contribute to the organization's financial success, making it easier for finance leaders to understand and support marketing initiatives. With tools to centralize spend and measure impact, marketers can move from reactive to proactive planning. As Christine Kelly of Cvent puts it, "By operationalizing our marketing plan, we have set the foundation to measure ROI and prove the value of marketing via data." Learn how collaboration with finance can unlock smarter strategies and stronger outcomes.
Pure Storage: The data journey for AI - ingestion to innovation
This article simplifies the complex process of building and refining AI models, emphasizing that it's about much more than just training the model. To create a useful AI model, organizations need to carefully manage their data across six key stages: finding it, preparing it, training the model, evaluating the results, deploying the model, and continuously improving it. Each step generates new data, often amplifying the original dataset with duplicates, metadata, and logs. Fred Lherault explains that AI projects aren't one-and-done; they're ongoing, with new data and feedback driving improvements. As a result, managing the ever-growing volume and variety of data is crucial. Organizations need to balance storage demands with sustainability and scalable technologies like as-a-service solutions to ensure long-term success.
Sage Intacct: Simple cyber security measures for SMBs to ensure a safer digital landscape
Small and midsize businesses (SMBs) face growing cyber threats but don't always have the resources or expertise to address them. Ben Aung discusses some practical, easy-to-implement steps SMBs can take to strengthen their cyber security, from assessing risks and enabling basic defenses like two-factor authentication to fostering a security-aware culture. Again, it's never solely about the technical aspects: Aung also highlights the importance of planning for incidents and leveraging cost-effective tools like cloud security and endpoint detection. This piece offers straightforward advice on building resilience and protecting a business in a proactive way.
Salesforce: The future of Enterprise AI isn't about more data - it's about the right data
Paul Sullivan picks up the topic of getting stuck in a loop of AI pilots - and explores the challenges companies face with data silos, incomplete information, and employee skepticism around AI. He emphasizes the importance of consolidating and cleansing data to ensure AI systems deliver reliable and actionable results. This piece includes practical steps like unifying enterprise data, leveraging real-time insights, and embedding AI into workflows to enhance decision-making, customer engagement, and operational efficiency. Sullivan also highlights the need for trust, data security, and human oversight to ensure successful and ethical AI deployment at scale.
No hype - lots of practical advice.
Samsara: AI is hastening the demise of legacy telematics systems
In the transport and logistics industry, AI has the potential to effect change in much more than the pieces moving through the supply chain. Telematics can be a powerful tool to improve health and safety, but what does that mean for fleet companies with legacy technology? Philip van der Wilt provides use cases from Samsara customers, including food distribution company Sysco GB, which is using AI-enabled live dashcam video footage to manage insurance claims, improve staff skills, and boost road safety. This quote from the Regional Operations Director provides context: Our process for assessing and managing driver behavior was random. Reviewing dashcam footage would be a whole day's work, and as there's only so much a team manager can review, many drivers were missed. It was impossible to tell who was demonstrating good behavior and who wasn't, and even harder to ensure the safety of our drivers at all times across the entire fleet.
ServiceMax: Six ways to boost the productivity of your field service technicians
Skills and productivity can be a topic of friction in more ways than one when mentioned in the context of AI. Joe Kenny addresses whether AI can improve productivity - but not in the sense of time and motion studies. Identifying the tasks that field service technicians hate, and looking at ways to address them, can make a big difference to employee motivation and performance. And this requires data. Keeping in mind the fact that technician hiring and retention is a significant challenge for organizations, there is no sense in advocating for a tool that could disengage staff and break trust. Kenny provides six ways that service leaders are making a difference for frontline field workers.
ServiceNow: How can retailers benefit from AI this Christmas?
AI is revolutionizing retail by enhancing customer and employee experiences, aiming to resolve problems faster and streamline workflows. Damian Stirrett points to Asda's partnership with ServiceNow: the retailer faced the challenge of separating more than 2,500 systems from former owner Walmart (which still owns a 10% stake in Asda), while also keeping business running. The change, which Asda dubbed Project Future, saw ServiceNow put 'at the heart' of everything Asda does, with a service-oriented architecture for all of Asda's employees. Building a one-stop-shop for employees to deal with issues ranging from IT to HR in one place, the move also enabled new workflows in Customer Service Management. This freed up employee time to focus on delivering for customers. The prospect of agentic AI offers further potential - it'll be interesting to revisit this use case next year to see how things are progressing.
UiPath: How to double your AI developers
Luke Palmara explores how citizen development programs address the growing talent shortage in tech by empowering non-technical employees to create AI and automation solutions. It highlights real-world examples, like Amazon, where citizen developers drive innovation, save resources, and boost ROI on AI initiatives. This article digs into ways that democratizing technology fosters collaboration, accelerates digital transformation, and ensures businesses stay competitive.
He notes: The process of training non-coders and business users on critical tech not only empowers them to create their own solutions but also unleashes the untapped potential of their diverse range of skills (e.g., customer service, user experience, marketing, etc.). This process has become even more vital in the age of AI, as companies need every worker to understand the capabilities of the technology and how to deploy it to foster collaboration and unlock new sources of innovation.
Workday: Better together - why insurance firms must modernize their Finance and HR applications
Nicole Carrillo explores the urgent need for insurers to modernize their core enterprise platforms, transitioning from outdated systems to unified, cloud-based solutions. As noted in the article, no company decides to do this for fun. However, there is an urgent imperative - research highlights how legacy systems create inefficiencies, siloed data, and compliance challenges. As Carrillo notes: The emergence of new technologies like generative AI and LLMs (Large Language Models), the maturity of cloud computing, and modern data management methods, makes it a huge, missed opportunity to remain on old systems rather than tackle moving to a modern platform. The threats this poses on insurance organizations include the gaps developed in data security, system availability, and compliance processes. Accelerating insurers' digital transformation not only unites their data, but also establishes trust within their organization that their greatest asset - data - is secure. This piece includes real-world examples like CNA and Unum, and unpacks how embracing modern technology unlocks agility, efficiency, and strategic decision-making in the insurance sector.
Zoho: Beyond the data deluge - how companies are navigating self-service analytics
Data has been an underlying theme of decision-making, training and implementing AI, financials and change management. One of Zoho's articles this year took a deep dive into the proliferation of data and how to manage it, using two real-world use cases. One customer had over 1,300 client systems connected to its data center and wanted to get the maximum insights available, as all departments stood to benefit in one way or another. But adoption, dashboard optimization and the fundamental basics of who could search what, and how, were hurdles that needed to be overcome. Find out how they tackled it, and the lessons learned. This is the ultimate starting point for any organization looking to get to grips with data - whether it's for informative purposes, or for development on the road to AI.
[5]
The Enterprise Tech Year - Marketing
The central theme in marketing this year was how to implement generative AI into marketing and sales technology. Yes, there were other conversations, but they all seemed to come back to the use of AI. It took the better part of the year, but we finally started to see some solid examples of AI for more than writing content. Everything, whether AI-related or not, came back to creating better customer experiences across the buyer journey. It's less about shiny object syndrome and more about how we use content, including audio and video, and how we measure performance, all in the service of the customer. And, to be honest, it feels like it's about time. Here are my picks of the best diginomica content for 2024. There is value in leveraging generative AI for content, and most companies are cutting their teeth on this technology by using it for content creation. But are there better uses that will help marketers in more important ways? Why? We are well into generative AI, and most discussions are still around using it to write content. To be fair, that capability has improved dramatically. But we are also finally starting to see the things I was looking for - using generative AI to take on more of the operational tasks of marketers (and sales). Instead of relying on pre-built reports, why can't marketers ask questions and have AI do the hard work of analyzing and pulling everything together? And then enable the marketer to select which insights to act on and help create the framework of a campaign to make that happen. Sure, there's content creation in there, but it's only part of what happens, and it's not the first thing. Why? I was starting to believe we might not see any exciting use cases for generative AI in marketing. Until I learned what Bloomreach was building. This was the first time I saw a copilot (Bloomreach calls them 'pilots') take on the bulk of the work of setting up a campaign end to end. The marketer tells the AI what they want, and the AI goes away and sets everything up, leveraging data about customers, from past campaigns, and more. The marketer can review, modify, and press publish. It's every marketer's dream because it allows them to focus on the important work - understanding customers and creating campaigns and experiences that work. Custom tool handling is an opportunity for software vendors, Penn said. He said if a martech vendor can expose its services to the models, then the models can intuit when to use that tool. For example, a model can call an API to connect to Demandbase to pull in intent data. Or it could connect to Zoominfo to pull in a contact's information. Why? Most martech vendors are still in the copilot phase of generative AI, and that's okay. But things are changing fast. Agents that do the bulk of a task end to end are gaining in popularity, especially the ones that can connect to other agents to perform a series of tasks. And then there's custom tooling, which we haven't heard much about from marketing vendors but which holds a huge opportunity because it builds on the popular LLMs everyone uses (see the sketch at the end of this section). Wait five more minutes - I'm sure a new way to leverage generative AI will appear and capture our attention. We keep talking about how B2B buying behaviors have fundamentally changed.
We hear everyone talking about moving from MQL-centric marketing to signal-based marketing, the importance of looking at different intent signals and what's the propensity to engage further, the propensity to buy, the propensity to renew, looking at that using AI to drive forecasting around stuff like that. But I just don't think that we are addressing it properly if we don't have content intelligence baked into our tech stack and baked into our overall go-to-market strategy. Why? We keep saying that content is king, but do we truly monitor and track its performance in relation to the customer experience? Too often, marketers create and publish content but don't know how it impacts how customers engage with the brand or how that content drives greater engagement and conversion. And so they keep creating more content. Content intelligence empowers marketers to understand the value of their content better. When they do that, they can prove that their content is king. Perhaps the most overlooked reality of effective content capabilities is that there is no single right answer for what they should look like. Tech stacks and organizational structures alike are most effective when they are defined by the unique requirements and priorities of a given business or organization. And in all cases, effective internal communication and content operations play an instrumental role. Why? You can manage your content effectively if you have the best content management system, right? Not really. The best content management system depends on your business's unique requirements, which means you have to spend time understanding those requirements. Also critical are governance policies. I think this piece aligns with the one before -- you have to treat your content as a critical business asset, and until you do, your experiences won't be great. I've worked in enterprise software for a long time, and understanding what the experience a person is having as they move through the products is, is a constant learning curve, especially for something like Salesforce, where there's so many different kinds of users using this in different ways. So finding the common things that really tie it together has been a good and fun challenge of this role in the last 18 months. Why? Every piece of software that's been around for any amount of time goes through design changes. This piece provides insights into how best to think about updating the user experience, using Salesforce as an example. There is a lot to think about, including visual elements, user experience, and not breaking the customization that existing customers have made. It may not be easy, but it is possible. I think another thing that's going to happen is basically the floor of the quality of videos is going to go up. It's going to be easier for the average person to make a video that they're proud of. I think it's going to take cultural time for people to get used to the AI avatar thing. I think that's going to take longer to actually proliferate, but I think that the tools to aid in production are going to happen very, very rapidly. Why? AI is not only impacting text-based content; it's also driving change in video and audio production, democratizing what was once the purview of graphic designers and video production companies. We're also seeing a rise in the use of avatars and image and video generators. The technology is cool, and the opportunities are many. Let's just be sure not to lose our humanity in the process of all this AI.
Attribution isn't as easy as saying that for every dollar you spend on a specific channel, x number of leads can be obtained. That's because it's rare that a person sees an ad in one channel and then automatically converts. And it's even more rare that they've only seen your brand in a single channel. There is no straight line to conversion. Why? Attribution has become a dirty word in marketing because determining it is challenging at the best of times. Marketing impact modeling is a different way to determine what marketing tactics are helping drive conversions. You can look at each channel involved and leverage predictive technology to learn how each channel supports the journey. It doesn't require a straight-line journey, which is good because that path to purchase doesn't exist. According to Salesloft's Chief Product Officer, Ellie Fields, the traditional funnel is broken, and it's hard to argue with her. Buyers don't proceed in the nice orderly fashion that's typically defined (awareness, consideration, decision, customer - or some form of that funnel/journey). Instead, they are all over the place. That's even more true for B2B buying teams. Fields argues that Marketing and Sales have operated in silos, and it has been a disservice to the buyer because they are basically dropped from one team to the other with little to no explanation of why, resulting in the buyer often feeling like they are starting from the beginning when they engage Sales. Why? I share this one because it's a good example of how marketing and sales technology are coming together to support the buyer journey. Instead of each group doing their own thing, they can see across the tech stack and, as a result, have access to the customer data needed to create the right experiences -- whether on the marketing side or when sales reach out for a conversation.
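To make the custom tool handling idea described by Penn a little more concrete, here is a minimal, hypothetical Python sketch of a martech service exposed as a tool that a model can decide to call. The get_intent_signals tool, its parameters, and its output are invented stand-ins, not the real APIs of Demandbase, Zoominfo, or any other vendor.

import json

# Illustrative sketch only: the tool below is an invented stand-in, not a real vendor API.
TOOLS = {
    "get_intent_signals": {
        "description": "Return buying-intent topics for a target account.",
        "parameters": {"account_domain": "string"},
        "handler": lambda account_domain: {
            "account": account_domain,
            "topics": ["customer data platforms", "email deliverability"],
        },
    },
}

def tool_manifest() -> str:
    """What the model sees: names, descriptions and parameters, so it can decide when to call a tool."""
    return json.dumps(
        {name: {key: spec[key] for key in ("description", "parameters")} for name, spec in TOOLS.items()},
        indent=2,
    )

def dispatch(tool_call: dict) -> dict:
    """Route a model-emitted call such as {'name': ..., 'arguments': {...}} to the vendor's service."""
    spec = TOOLS[tool_call["name"]]
    return spec["handler"](**tool_call["arguments"])

print(tool_manifest())
print(dispatch({"name": "get_intent_signals", "arguments": {"account_domain": "example.com"}}))

Once the vendor publishes a manifest like this, the model can intuit when the tool is relevant, and the dispatcher does the rest - which is why custom tooling builds so naturally on the popular LLMs everyone already uses.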
[6]
The yellow brick road to agentic AI - SiliconANGLE
The road to agentic artificial intelligence will be paved with stepping stones that progressively build on each other. Our research suggests that agentic AI will not suddenly appear without a strong data foundation built on: 1) cloud-like scalability; 2) a unified metadata model; 3) data mesh organizing principles; 4) harmonized data and business process logic; and 5) an orchestration framework that incorporates governance, security and observability. Though some believe 2025 will be the year agentic AI comes to fruition, we predict bringing these capabilities together is a decade-long journey, and there's no shortcut on the yellow brick road to realizing agentic automation. In this Breaking Analysis, we piece together previous research and point to a dramatic change in the enterprise software stack. We'll explain how we see the journey playing out, the critical pieces of the emerging enterprise software architecture and the high-value layers of real estate in that system that are still taking shape. Over the course of the past two years, we've been laying the groundwork for understanding the impact AI has on the enterprise software stack. We've tried to cut through the agent washing and highlight the prerequisites for agentic AI success. We've discussed the progression of the data stack beyond separating compute from storage. And we've emphasized the importance of separating compute from data, underscored by open table formats such as Iceberg and its potential unification with Delta. We've also discussed the need to unify metadata and the shift of control from the database to the governance layer and how that piece of the stack is opening up (think Unity and Polaris). This all builds on the work done early on by Zhamak Dehghani with data mesh as an organizational construct for breaking down data silos. Earlier this year our research focused on configurable business processes in the form of metadata, using the Salesforce Inc. data cloud as an example. And more recently, we've explored harmonizing not only data but also shared business logic, with examples such as Celonis SE, Palantir Technologies Inc. and RelationalAI Inc. AI is a catalyst and is disrupting an enterprise software stack that is five decades old. Many believe this AI era is the most profound we've ever seen in tech. We agree and liken it to mobile's role in driving on-premises workloads to the cloud and disrupting information technology. But we see this as even more impactful. For AI agents to work, however, we have to reinvent the software stack and break down 50 years of silo building. The emergence of data lakehouses is not the answer as they are just a bigger siloed asset. Rather, software as a service as we know it will be reimagined. Two prominent chief executives agree. At Amazon Web Services Inc.'s recent AWS re:Invent conference, we sat down with Amazon.com Inc. CEO Andy Jassy. Here's what he had to say about the future of SaaS: I'll say supply chain is another area that we think we can be very effective and we have a lot of experience just like customer service there. But I also believe that AI is going to open up all sorts of new SaaS opportunities and software-as-a-service opportunities. I've been saying this for a long time, I've told you guys this too, which is that I think every single SaaS company and application that we know of will be reinvented with what's available in the cloud. And I think that's doubly true when you think about what AI allows. And Microsoft Corp.
And Microsoft Corp. CEO Satya Nadella on the BG2 Pod recently went into some depth that we're going to unpack. Here's what he said: The notion that business applications exist. That's probably where they'll all collapse, right? In the agent era, because if you think about it, right, they are essentially CRUD databases with a bunch of business logic. The business logic is all going to these agents and these agents are going to be multi repo CRUD, right? So they're not going to discriminate between what the backend is. They're going to update multiple databases and all the logic will be in the AI tier, so to speak. And once the AI tier becomes the place where all the logic is, then people will start replacing the backend. Jassy sees cloud plus AI as the transformative catalyst and Nadella talks about multi-repo CRUD databases - which stands for Create, Read, Update and Delete. With the logic in the AI tier, when he talks about replacing the backend, Nadella, like Jassy, envisions a sea change in SaaS. What Nadella is talking about is really a 10-year vision without mentioning any intermediate steps on the way. Someday, we may have the technology to put all the deterministic rules and logic that constitute an application today into a nondeterministic neural network, in the form of an agent. We do not have that technology today. And so, there are many steps between where we are today and getting to the vision Nadella talked about, and we're going to go through those. What he's saying, basically, is that we can "kneecap" every SaaS app and turn it into just its database schema. But if we do that, we'll have another Tower of Babel, with a bunch of agents that don't know how to talk to each other -- even though the vision is that the agents can talk across the databases. Let's pick up from where we are today: the current modern data platform. A couple of years ago we started talking about the sixth data platform beyond the five modern existing data platforms typified by Snowflake and Databricks. Above is data from Enterprise Technology Research, which shows Net Score or spending momentum on the vertical axis and Overlap or penetration into a data set of more than 1,700 IT decision makers on the horizontal plane. We're plotting Snowflake Inc. and Databricks Inc. along with Google LLC, AWS and Microsoft. We also show Oracle Corp. for context as the legacy database king. The red dotted line at 40% indicates a highly elevated Net Score. We annotate Microsoft and Oracle because they are in the data game but they're not considered representations of the modern data stack per se. But we don't want to debate that today. We show this because these are the players that are squarely in the mix of this transition. They have a lot to gain and much at risk. As shown below, the cloud data platforms are the starting point on our stroll down the yellow brick road. As reported previously, there is a shift underway from control at the database layer toward the governance catalog, shown above as the operational metadata. This shift begins to lay the foundation for a new application platform. Horizon from Snowflake and Polaris, its open source catalog, Unity from Databricks, and other established governance platforms such as Informatica, Collibra and Alation are all in play. Firms are thinking differently about organizing around data, employing concepts like data mesh and treating data as a product. They are leveraging information about people, places and things in a distributed organization.
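To ground Nadella's "multi-repo CRUD" comment from the quote above, here is a minimal structural sketch, with every name invented for illustration. Each backend exposes only plain create/read/update operations, and a piece of business logic that would once have lived inside a single SaaS application instead spans two stores. In a real agentic system an LLM-backed planner would decide these steps; the hard-coded function simply shows where the logic goes when it leaves the application tier.

from typing import Any, Protocol


class CrudStore(Protocol):
    """The lowest common denominator Nadella describes: a CRUD database."""
    def create(self, record: dict[str, Any]) -> str: ...
    def read(self, record_id: str) -> dict[str, Any]: ...
    def update(self, record_id: str, fields: dict[str, Any]) -> None: ...


class InMemoryStore:
    """Stand-in for a CRM, ERP or billing backend reduced to its schema."""

    def __init__(self) -> None:
        self._rows: dict[str, dict[str, Any]] = {}

    def create(self, record: dict[str, Any]) -> str:
        record_id = str(len(self._rows) + 1)
        self._rows[record_id] = dict(record)
        return record_id

    def read(self, record_id: str) -> dict[str, Any]:
        return self._rows[record_id]

    def update(self, record_id: str, fields: dict[str, Any]) -> None:
        self._rows[record_id].update(fields)


def close_won(crm: CrudStore, billing: CrudStore, opportunity_id: str) -> str:
    """Logic in the 'AI tier': it spans backends instead of living in one app."""
    opportunity = crm.read(opportunity_id)
    crm.update(opportunity_id, {"stage": "closed_won"})
    return billing.create({"customer": opportunity["customer"],
                           "amount": opportunity["amount"]})


crm, billing = InMemoryStore(), InMemoryStore()
opp_id = crm.create({"customer": "Acme", "amount": 120_000, "stage": "negotiation"})
invoice_id = close_won(crm, billing, opp_id)
print(billing.read(invoice_id))

Without shared semantics across those stores, every such function is hand-built, which is exactly the Tower of Babel risk flagged above.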
The real excitement lies in the movement toward incorporating business processes and harmonizing both data and processes, enabling swarms of agents to work together toward a desired outcome. The original cloud data platform -- Snowflake -- was among the first to separate compute from storage. Over time, the industry has recognized the need to separate compute from data. With the rise of open table formats or OTFs, multiple compute engines can access the same data. This requires separate metadata, including technical and operational details such as lineage. Such metadata formed the foundation of data pipelines and created data products defining concepts such as "customer," "product" or "lead." However, these constructs remain static entities. To track a customer's journey from engagement to prospect, then to lead, and ultimately to conversion, for example, current methods simulate the underlying business process using business process metadata. This provides a static representation that can be configured per customer, but only to a limited extent. Salesforce's Data Cloud stands out for customer data, representing Customer 360 and the entire customer journey in a harmonized way that supports analytics and applications. Instead of merely sharing tables, platforms share the concept of a customer and their journey. The next challenge is moving beyond a static metadata picture to sharing the business process logic itself across applications, enabling ultimate flexibility. With harmonized process logic, agents can communicate across the entire Customer 360 and the customer journey, using a common language. Without this, agents must contend with scattered tables. An organization such as JP Morgan Chase & Co. might have 6,000 tables referring to "customer," creating a modern Tower of Babel that does not function effectively. This relates to the notion that the future may consist of numerous SaaS applications reduced to schemas, with thousands of tables referencing "customer" remaining disconnected. No current technology allows AI agents to harmonize such complexity independently. Symbolic harmonization is needed so agents can speak a unified language. This harmonized logic is crucial for achieving true automation. Perhaps in the distant future, it will be possible to discard this logic layer, but for now it remains out of reach. Let's take a look at how the enterprise software stack is changing, as highlighted below. The concept shown above was introduced earlier this year. At the bottom of the stack, AWS represents the underlying cloud infrastructure, setting the stage for others like Google, Microsoft and Oracle (with OCI) to join in. Snowflake popularized the separation of compute from storage, essentially providing infinite capacity as a cloud data warehouse. Databricks then focused on data science and data pipelines, influencing the shift toward open table formats such as Iceberg. Databricks acquired Tabular and is now working to unify Delta and Iceberg. Amazon's announcements at re:Invent around S3 tables and open table formats further underscore this trend, aiming for read/write capabilities and governance integration. The key point, highlighted on the left side of the referenced chart, is the shift of control from the database management system to the governance layer. This governance layer is increasingly open source, elevating the importance of what can be termed the "green layer." This includes the semantic layer, which harmonizes data. 
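A toy example may help show what "control shifting to the governance layer" means in practice: access policy is declared once against the catalog's view of a table and enforced the same way for every engine that asks. The policy structure, table, roles and columns below are purely illustrative assumptions and are not Horizon, Polaris or Unity syntax.

# Policies keyed by table, owned by the governance catalog rather than any one DBMS.
POLICIES = {
    "sales.orders": {
        "allowed_roles": {"analyst", "finance"},
        "masked_columns": {"card_number"},
    },
}


def authorize_scan(table: str, role: str, requested_columns: list[str]) -> list[str]:
    """Return the columns this role may read, or raise if the table is off limits."""
    policy = POLICIES.get(table)
    if policy is None or role not in policy["allowed_roles"]:
        raise PermissionError(f"role {role!r} may not read {table}")
    return [c for c in requested_columns if c not in policy["masked_columns"]]


# A warehouse query, a Spark job or an agent all pass through the same gate.
print(authorize_scan("sales.orders", "analyst", ["order_id", "card_number", "amount"]))

Policy enforcement is one half of the governance layer's new role; the semantic layer that harmonizes data is the other.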
However, as noted in previous research, the process now goes beyond data -- it includes business logic and business processes. This is where the new source of competitive value emerges. Salesforce, Palantir, Celonis and others are participating in this evolving ecosystem, creating a new competitive environment. As previously emphasized, the data platform landscape was once dominated by the DBMS and its control of storage. The opening of the table format meant that the DBMS could no longer define the state of the tables if other engines were going to read and write to them. Control shifted to the operational catalog. Databricks' Unity catalog, introduced in 2023, appears to be a strong contender here. Although there have been statements of direction around open-sourcing Unity that are not fully realized yet, Databricks executes rapidly, and the unification of Iceberg and Delta is now expected sooner than we initially anticipated - perhaps as early as Q1 2025. Snowflake's Horizon catalog, the new source of truth for its ecosystem, still runs atop the Snowflake engine but synchronizes with Polaris. This allows governance policies set in Horizon to be applied to the open Iceberg ecosystem. The next layer up involves adding data semantics for concepts like customers, products, leads, and campaigns -- the first part of the semantic layer. The far more challenging aspect is harmonizing processes, which requires changes to databases that have been decades in the making. Achieving this will pave the way for agents that can operate effectively in this new environment. Let's paint a picture of what this stack looks like at a steady state. The evolution from on-premises environments to the cloud began with infrastructure as a service, which reduced much of the heavy lifting associated with infrastructure management. This progression continued with platform as a service and SaaS, where more infrastructure activities -- what Amazon calls "undifferentiated heavy lifting" -- became managed services. However, the green layer at the top of the stack is where new value is emerging. Three layers are shown above in the green: the digital representation of a business, a network of agents, and a new layer of analytics guided by top-down organizational goals. This structure enables the interpretation of goals and adjustments based on market changes or human guidance. The result is bottom-up outcomes driven by agents collaborating with each other and with humans, while taking action in a governed manner. This new set of layers integrates the silos of applications and data built over the last 50 years. These silos can now be abstracted and turned into what Nadella described as "sediment." Nadella's viewpoint focuses on the data layer, while this perspective emphasizes the application logic layer that hosts the agents. There is a clear business imperative behind this shift. We believe companies will differentiate themselves by aligning end-to-end operations with a unified set of plans -- from three-year strategic assumptions about demand to real-time, minute-by-minute decisions, such as how to pick, pack and ship individual orders to meet long-term goals. The function of management has always involved planning and resource allocation across various timescales and geographies, but previously there was no software capable of executing on these plans seamlessly across every time horizon. This end-to-end integration requires a harmonized digital representation of the business as a foundation. 
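The "thousands of tables that all mean customer" problem described above is, at its core, a mapping problem. The sketch below shows the shape of a semantic-layer entry: physical tables and columns (all invented here) are mapped onto one canonical Customer concept, so agents and analytics query the concept rather than the individual schemas.

from dataclasses import dataclass


@dataclass(frozen=True)
class FieldMapping:
    physical_table: str
    physical_column: str
    canonical_field: str


# A handful of the mappings a real semantic layer would hold for "customer".
CUSTOMER_MAPPINGS = [
    FieldMapping("crm.accounts",     "acct_name",   "customer.name"),
    FieldMapping("billing.parties",  "party_label", "customer.name"),
    FieldMapping("support.contacts", "org",         "customer.name"),
    FieldMapping("crm.accounts",     "arr_usd",     "customer.annual_revenue"),
]


def physical_sources(canonical_field: str) -> list[tuple[str, str]]:
    """Resolve a canonical field to every physical table/column that feeds it."""
    return [(m.physical_table, m.physical_column)
            for m in CUSTOMER_MAPPINGS if m.canonical_field == canonical_field]


# An agent asks about "customer.name" and gets one answer, not thousands of schemas.
print(physical_sources("customer.name"))

A full digital representation of the business extends this idea from entities to the processes that connect them, giving the stack the harmonized foundation described above.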
With this, analytics can orchestrate and align agent activity that occurs not only within silos but also in collaboration with humans. Management thus becomes increasingly integrated into a software system -- an evergreen capital project that is never truly finished. Instead of relying solely on tacit knowledge stored in the minds of a management team, this knowledge is gradually converted into an ever more integrated software product. Let's zoom in a bit on some of the high-value pieces of the stack that we've highlighted previously but are worth reviewing. The evolution we envision connects backend systems -- both analytic and operational -- to extract business logic previously trapped inside applications and make it more accessible in real time. Rather than relying solely on analytic systems that produce historical snapshots, this approach aims to enable continuous decision making and automating workflows. Two layers stand out, highlighted in red: At the top, organizational goals guide the process. A high-level goal, such as gaining market share, may set constraints around margins or pricing and specify revenue targets and tactics to achieve an outcome. Agents can understand these goals and execute bottom-up actions within defined guidelines. Working together with other agents and with human input, these "worker bee agents" adjust to changes in the market and adhere to top-down frameworks. This is critical because the metric tree -- representing business goals from forward-looking strategies at the top to more technical and operational states at the bottom -- is not just a set of dashboards or historical reports. Instead, these metrics function like dials on a management system. Relationships between them must be learned over time. By applying predictive and process-centric platforms, organizations can conduct experiments, observe outcomes and refine their understanding of how market demand shaping or other actions influence results. When integrated with models and training cycles, agents learn from both human interventions and observed outcomes. If an agent encounters an exception it cannot handle, a human can step in to guide the resolution. Over time, the agent learns from these "teachable moments" and can handle similar situations independently. Likewise, when agents attempt to shape demand and measure the effects on metrics, they gain deeper insights that improve their future performance. This learning framework -- harmonized data, business processes and metric-driven goals -- offers a scarce and highly valuable layer in the enterprise stack. Though there may be many agents, there will be relatively few such business process platforms within any given organization. Ultimately, as agents learn from both direct human intervention and the outcomes of their actions, they improve continuously, driving innovation and operational efficiency. There are many participants, but below are some of the players that we're tracking in this new world and where we see their value-add in the stack. The bottom layer includes data platform providers such as Snowflake and Databricks, which are leading efforts to represent core business entities. Other companies like Relational AI, Celonis and EnterpriseWeb LLC are building cross-silo capabilities, often referred to as the business process layer. Above that layer, organizations such as Palantir, Oracle and Salesforce are harmonizing business processes within their own ecosystems. 
Moving further up, an agentic orchestration layer is emerging, featuring companies like Google, Microsoft and UiPath Inc. It is widely anticipated that AWS will also play a significant role in this evolving stack, based on recent announcements and developments. A key point is that it is far more challenging to move from representing business entities -- people, places and things -- to defining and aligning cross-silo business processes. The industry has spent decades building the data and application logic technologies needed to fuse these elements together. Relational AI, for example, uses a relational knowledge graph, allowing organizations to declaratively define application logic, similar to expressing requirements in SQL. This dramatically simplifies the process of articulating logic. Celonis provides business process building blocks so that customers can conduct process mining and configuration with minimal coding. Palantir excels at connecting deeply into core transactional systems but requires more procedural coding, as it does not supply out-of-the-box application templates. Salesforce, with its Data Cloud, offers comprehensive coverage of the entire customer 360 domain, including customer journeys and touch points, expressed through configurable business logic that fits its model. UiPath is in a position to automate processes, including those where APIs may not be available. These approaches highlight the complexity of harmonizing business logic across multiple platforms. Building a metrics tree of business outcomes requires a consistent representation of enterprise processes. This goes beyond simply connecting schemas from various applications. The metrics tree represents the "physics" of a business -- its behavior and logic -- linking high-level goals such as gaining market share to more granular operational metrics. Without harmonizing the underlying application logic, it is difficult to create this cohesive representation of business outcomes. In short, though companies have made substantial progress in harmonizing data at scale, the next frontier involves fully integrating both data and business processes into a unified stack. Achieving this will unlock the potential of agentic orchestration and deliver a new level of automation, insight, and adaptability for the enterprise. The massive opportunity ahead was explained by Erik Brynjolfsson in the graphic below, annotated by George Gilbert. The frenzy around enterprise AI largely relates to boosting productivity. On the surface, this suggests achieving similar or greater outcomes with fewer employees. Industry observers, such as David Floyer, often discuss realizing the same results with a small fraction of the workforce. The key question is how this will play out within the enterprise. Erik Brynjolfsson's perspective, depicted in a power law curve above, is useful here. Historically, packaged applications addressed certain high-volume, repeatable business functions -- often in the back office and other well-defined domains. Custom modifications and applications were then introduced to handle proprietary processes, specialized data, vertical industry tasks or unique organizational needs. These encompassed another sizable portion of automation. Yet beyond these implementations lies a very long tail of workflows that remain unautomated. This long tail represents the space where AI agents can deliver an order-of-magnitude increase in productivity. 
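A rough back-of-the-envelope illustration of that long-tail argument: if workflow frequency follows a Zipf-like power law, the head that packaged and custom applications already automate carries much of the volume, yet the vast majority of distinct workflows sit in the unautomated tail. The exponent and counts below are assumptions chosen for illustration, not measurements from Brynjolfsson's data.

import numpy as np

n_workflows = 10_000
ranks = np.arange(1, n_workflows + 1)
frequency = 1.0 / ranks                  # Zipf-style volume by rank
frequency /= frequency.sum()

head = frequency[:500].sum()             # top 5% of workflows: today's packaged and custom apps
tail = frequency[500:].sum()             # the other 95%: the long tail left for learning agents

print(f"The head carries {head:.0%} of task volume; the tail carries {tail:.0%},")
print(f"but the tail contains {n_workflows - 500:,} of the {n_workflows:,} distinct workflows.")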
Rather than relying solely on precoded, deterministic logic -- as seen in traditional packaged software -- the next generation of agents will learn dynamically. They will adapt to unanticipated scenarios and exceptions by observing outcomes, incorporating human feedback and refining their responses over time. In other words, at the left end of the curve, tasks were automated through deterministic logic because they were well-understood and repetitive. Further along the tail, there are countless less-common, more nuanced workflows that cannot be fully predefined. By deploying AI agents that learn and improve continuously, enterprises can gradually tame these unstructured, long-tail processes. However, current technology cannot simply discard existing deterministic rules and rely entirely on a multitude of autonomous agents. Doing so would result in a chaotic environment -- effectively back to a Tower of Babel -- where agents struggle to understand their roles and responsibilities. The path forward involves carefully combining traditional, deterministic systems with learning agents, enabling them to handle both well-understood tasks and emerging, unpredictable scenarios. Over time, as agents learn from outcomes and human intervention, more workflows can be automated, significantly increasing overall productivity. Let's end with a look ahead to 2025 and beyond. As previously discussed, there is likely to be significant "agent washing" in 2025. Many will market single agents or lightweight solutions as agentic systems, but closer inspection will reveal that the journey is only beginning. Some claim agentic AI capabilities will be widespread next year, yet substantial work remains. This is not a short-term trend; it is expected to be a multiyear process. Though efforts may start to take shape in earnest in 2025, the real impact may take two to 10 years to fully unfold. One major concern is that vendor-specific agents will emerge within application silos, reinforcing fragmentation and risking yet another unfulfilled promise by the tech industry. Many previous initiatives -- such as customer 360, certain data warehousing efforts and the big-data craze -- failed to meet lofty expectations. Although the cloud has largely delivered on performance, a number of data-related promises have been broken or only partially realized. The danger is that the industry may simply bolt agents onto existing legacy architectures, effectively "paving the cow paths" rather than delivering a meaningful transformation. The opportunity is to reinvent the application stack rather than perpetuate the status quo. Though leaders such as Jassy and Nadella have acknowledged the need for change, even they concede there are challenges and uncertainties in how this will develop. The vision of an agentic future that delivers a 10x productivity gain hinges on harmonizing end-to-end business processes, ensuring that agents and humans collaborate effectively and share a common understanding. Different vendors are placing varied bets. Major cloud providers are setting forth their strategies, data platforms like Snowflake and Databricks are staking their positions, and a diverse group of application players -- including ServiceNow Inc., Salesforce, Oracle and SAP SE -- are shaping their own approaches. Meanwhile, a flood of investment is pouring into agent startups. The question is whether these emerging players will help integrate the stack or create new silos.
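Picking up the earlier point about combining deterministic systems with learning agents rather than discarding the rules outright, here is a minimal sketch of that hybrid pattern. The "agent" is a stub standing in for an LLM-backed planner, the human reviewer is simulated, and all case fields and thresholds are invented; the structure to note is rules first, agent second, human escalation for low-confidence exceptions, and the resolution remembered for next time.

learned_rules: dict[str, str] = {}      # exceptions a human has already resolved


def deterministic_rules(case: dict) -> str | None:
    """The well-understood, repetitive head of the curve stays rule-based."""
    if case["amount"] <= 100:
        return "auto_approve"
    return None


def agent_propose(case: dict) -> tuple[str, float]:
    """Stub for a learning agent: a proposed action plus a confidence score."""
    return ("approve_with_note", 0.55 if case["amount"] > 5_000 else 0.9)


def human_review(case: dict) -> str:
    """Simulated reviewer; in practice this is a person handling the exception."""
    return "route_to_manager"


def handle(case: dict) -> str:
    if (action := deterministic_rules(case)) is not None:
        return action
    if (remembered := learned_rules.get(case["kind"])) is not None:
        return remembered                              # a previous "teachable moment"
    action, confidence = agent_propose(case)
    if confidence < 0.7:
        action = human_review(case)                    # human steps in
        learned_rules[case["kind"]] = action           # the agent learns for next time
    return action


print(handle({"kind": "travel_expense", "amount": 80}))       # handled by a rule
print(handle({"kind": "vendor_contract", "amount": 25_000}))  # escalated, then learned
print(handle({"kind": "vendor_contract", "amount": 30_000}))  # handled from the learned rule

Whether the rules-versus-agents boundary is drawn well will depend on who controls the underlying logic in the first place.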
Many of these agents will need access to the business logic currently locked inside existing applications. Without harmonized logic and accessible platforms, these agents could struggle to deliver meaningful value, forcing them to pay a premium to tap into that logic. This underscores the importance of not skipping crucial steps. The path forward involves creating new infrastructure layers, incorporating genuine harmonization and avoiding the trap of superficial bolt-ons. Although realizing this vision will take time and persistence, focusing on the pieces that do not yet exist -- and making them real -- can substantially increase the likelihood of achieving the productivity gains envisioned by this agentic era. What do you think? How are you thinking about agents in your organization? What steps are you taking to prepare?
[7]
Workday co-President Sayan Chakraborty discusses the value and risks of AI agents in the enterprise
AI agents have become a big talking point for enterprise technology vendors in recent months, but is there substance behind the hype? I had the opportunity recently to catch up with Sayan Chakraborty, co-President of Workday, who leads the company's product and technology organization. As someone who started his career as literally a rocket scientist at NASA, he's not only on top of everything that's going on under the hood at Workday, but is also remarkably level-headed about both the potential and the shortcomings of current technology. Even so, he admits the current pace of change is unnerving: You can't rest... I think for all of us, if you're reading an article about modern AI that's more than four months old, you're wasting your time. I'm not young anymore, and I've been through the Internet boom, I've been through dot-com, I've been through mobile, I've been through cloud. This is faster than all of those. It's something new every day, every week, and that's daunting. I think it's daunting, as a technologist, to keep up with the state of the art. It helps that the core principles are well understood, but the extrapolation is really interesting to kind of see how close are we getting to... where it can operate fairly autonomously. One example, which is at the heart of the latest generation of AI agents, is how quickly the technology around Large Language Models (LLMs) has evolved beyond simply interpreting and creating text or images to allow them to actually plan and orchestrate tasks. He goes on: If you had asked me six months ago, could LLMs do task orchestration? I would have told you, 'Maybe.' Now I will tell you, 'Almost certainly, yes -- at scale.' This is an impressive new twist in the technology's versatility, and marks a step change in capability compared to the chatbots and process automations of the past. He adds: I think that is what is different than past automation approaches. This is leveraging the neural network, the Large Language Model, for task completion and even task planning. This level of versatility is comparable to earlier technologies such as the steam engine or electric motors, which explains why vendors are getting excited. He goes on: There are a few cases in human history where we get these broad, horizontal technologies that are pretty good at lots of different things. This is one of those. This is a general-purpose engine. It happens to be very, very good at general-purpose problem solving, all different kinds of problems. Obviously, LLMs were designed to solve one particular problem, which was foreign language translation -- human language to human language translation -- in the same way that steam engines were originally designed to pump water out of coal mines in England. And then, of course, they were used everywhere. That's what we're seeing with the LLMs. What happens when you take this general-purpose problem solver and apply it to completing tasks, and then what happens when you then apply it to planning out task orchestration? That's really what agentic AI is all about. Agents really do, therefore, represent a significant step up in capability from the AI assistants and co-pilots of just a few months ago. While these still have an important role to play, they're not delivering huge productivity gains. 
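The task-planning capability Chakraborty describes can be pictured as a simple plan-then-execute loop: the model drafts an ordered list of steps and an orchestrator runs each step with a registered tool. In the sketch below the planner is a stub standing in for a real LLM call, and the tool names are invented; the shape of the loop is the point, not the specific calls.

from typing import Callable

# Tools the orchestrator is allowed to call, keyed by name.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_policy": lambda topic: f"policy text for {topic}",
    "file_ticket":   lambda summary: f"ticket opened: {summary}",
}


def draft_plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in for an LLM that returns (tool, argument) steps for the goal."""
    return [
        ("lookup_policy", "travel reimbursement"),
        ("file_ticket", f"exception request: {goal}"),
    ]


def run(goal: str) -> list[str]:
    results = []
    for tool_name, argument in draft_plan(goal):
        # Orchestration, not just text generation: each planned step is executed.
        results.append(TOOLS[tool_name](argument))
    return results


print(run("reimburse a conference trip booked outside policy"))

That leap from generation to orchestration is also why the earlier wave of assistants and co-pilots, useful as they are, has not delivered dramatic productivity gains.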
He explains: I think what we generally see is an uplift that is in the low double-digit percentages of improvement -- and then there's lots of cases where that content generation, you end up spending as much time reviewing the content produced by the AI as you would have in writing the content the first place. So you're shifting the work from generation to review, but you may not be seeing dramatic savings from all of these content generation use cases that were really the primary focus for a lot of people last year. In addition, he says, the productivity improvements from co-pilots are "unevenly dispersed," particularly when you take into account the operating and development costs consumed by the technology. He explains: The less knowledgeable the user is, the more the co-pilot helps them, up to the point where experts actually find the co-pilot to be annoying, to be quite frank. So it's really about uplifting the middle and bottom quartile of worker. Or [for] assisting a worker who doesn't need to be an expert in pay, but may have a question once a month or twice a year, and so doesn't want to become an expert, it's quite useful. What has been missing is a dramatic productivity gain from what is an expensive technology to operate. As you think about the total lifecycle cost of some of this AI that we're producing right now, it's truly staggering... When you start thinking about an enterprise that needs to really have ROI, that's where agents can actually provide dramatic value. He believes that co-pilots will continue to have a role as part of a continuum alongside agents, particularly as a mechanism for allowing the human to have a check-in on an agent's progress. This will be important as people build up confidence and trust in the technology. He explains: In a lot of use cases, as with anything else in life, you will want to see the agent show its work before you trust it to do large-scale things on your behalf, as we would with a human employee, as we would in any other case. I don't think it's going to be an instant transition from no agents to agents doing everything in a seamless way. That's not how we can or should adopt the technology -- especially because, as we all know with this technology, it's a popularity engine, it's not a truth engine. At its annual Rising conference in September, Workday introduced its first four agents, along with plans to roll out many more. Alongside agents for recruitment, expense management and succession, the fourth was Workday Optimize, which rather than automating a functional task, identifies and fixes issues in the customer's implementation of the Workday system. For example, there might be a request for manual data entry that could be automatically completed using an integration, or it might recommend a more efficient sequence of steps in a workflow. The genesis of this automation optimizer came as a result of the Workday team feeding data about its systems and user behavior into the training model, which surfaced unexpected insights. Chakraborty explains: You're training this general-purpose engine on how to solve a particular problem, let's say expenses, hiring, whatever your problem, and you realize that that output is just interesting by itself -- just understanding what processes are running in your businesses. As a side-effect of this original training process, we realized that there was actually a really interesting thing happening here, which was understanding how the enterprise was actually operating. 
We didn't want to keep that to ourselves. We wanted to make that available to customers as well... That is and was originally purposed for a rich set of information for our product managers to understand how to change these tasks, how to optimize these tasks, how to change the user experience of these tasks to improve them. And then you realize that, 'Hey, this is actually valuable information that we should be sharing with the customer, not just using ourselves.' But despite the potential benefits, he is also aware of the risks of unleashing agents on enterprise processes and highly conscious of the need to build in sufficient guardrails. He goes on: As we think about challenges going ahead, as we think about this next age, I think a lot of them will be a recapitulation of challenges we faced before. I'll give you an example. At Workday one of the reasons why we're careful about implementing our agents is, you do not want an agent with unfettered access to your system, any more than you would want a human with unfettered access to your HR or financial system... That's important at Workday, that we don't give the agent capabilities that it shouldn't have, because if we ask the agent go solve this problem, it's going to go solve the problem. It's not going to have an ethical barrier. It's not going to have a conceptual barrier if it needs to go and do something, unless we provide it with that. We do that with humans. We have access control in Workday. There's things I can do that somebody else can't do and vice versa. I shouldn't be able to go in and touch Workday's financials. But there are people at Workday who can, and should be able to. Our agents, as we make them more and more general-purpose, and as they have this pretty awesome capability to do these things, I think we come back to some old problems, including access control. Luckily, Workday is very well placed to solve that problem -- because we do it for humans, to do it for agents. But I do think this is something that our industry in general needs to be thoughtful about. It's important to get this right, he adds, because the speed at which agents can operate allows them not only to work faster but also to spread chaos far faster than humans can: Keep in mind that the agent will be operating in computer time, not human time. If it's going to make mistakes, it's going to make a lot of mistakes very fast -- way faster than a human being ever could. And so you do not want to give an agent unfettered access to your enterprise systems. As well as ensuring that agents can't inadvertently overstep the boundaries, it's also going to be important to guard against the potential for malicious actors to exploit them. He adds: Similarly, we think about the number one way today for hackers or persistent threats to get inside your enterprise is through hijacking one of your employees... This is just an omnipresent threat that we're living under today. Now imagine that I don't try to attack a human, I attack an agent that has lots of access. We've now opened up a new front in the cybersecurity war, and it's not clear that that's a war we're winning right now. I think these are incredibly powerful tools, and that opens up challenges, and we're all in... We are trying to be incredibly thoughtful about how we do it, so we're not creating as many problems as we're solving with this technology, and I think as an industry, we need to be thoughtful about that. 
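The access-control point is worth making concrete. Here is a hedged sketch of the pattern, in which the scope names, tools and permission model are illustrative inventions rather than Workday's implementation: every tool call an agent attempts is checked against the scopes that agent was explicitly granted, exactly as role-based access control is applied to a human user.

# Scopes granted per agent, mirroring the roles a human user would hold.
AGENT_SCOPES = {
    "expense_agent":    {"expenses:read", "expenses:write"},
    "recruiting_agent": {"requisitions:read"},
}

# The scope each tool requires before it will run.
TOOL_REQUIRED_SCOPE = {
    "approve_expense":    "expenses:write",
    "post_journal_entry": "ledger:write",   # finance-only; no agent above holds it
}


def call_tool(agent: str, tool: str, payload: dict) -> str:
    required = TOOL_REQUIRED_SCOPE[tool]
    if required not in AGENT_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} lacks scope {required!r} required by {tool}")
    return f"{tool} executed with {payload}"   # the real system call would go here


print(call_tool("expense_agent", "approve_expense", {"report": "EXP-104", "amount": 86}))
# call_tool("expense_agent", "post_journal_entry", {...}) would raise PermissionError,
# no matter how confidently the agent decides it "needs" to touch the ledger.

Scoping agents this way does not remove the new attack surface Chakraborty describes, but it keeps a hijacked or misfiring agent inside the same fences a compromised human account would hit.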
Lots of food for thought from someone at the frontier of harnessing agentic AI for enterprise outcomes.
A comprehensive look at how AI shaped policy discussions, enterprise strategies, and marketing practices in 2023, highlighting both opportunities and concerns across various sectors.
As artificial intelligence (AI) continued its rapid advancement in 2023, policymakers and industry leaders grappled with a range of complex issues. The potential for AI to accelerate digital transformation and solve global challenges was tempered by concerns about its societal impacts [1]. Key policy issues included the erosion of trusted media, job displacement, mental health effects, data privacy, and healthcare implications.
Some AI vendors raised alarmist scenarios about existential risks to distract from more immediate practical concerns. However, experts warned that the gradual erosion of human agency through numerous small impacts could be just as concerning as apocalyptic scenarios [1]. There were calls for better standards and regulations to ensure the responsible development of AI technologies.
For enterprises, 2023 was a year of rethinking digital transformation roadmaps in light of rapid AI innovation [2]. While the core principles of digital-first business remained relevant, the landscape for implementing them changed dramatically.
Many companies rushed to adopt generative AI capabilities, but experts cautioned that a significant portion of early AI investments may be wasted [2]. Key recommendations for enterprises included:
In the marketing domain, generative AI dominated discussions but practical applications expanded beyond just content creation [5]. Key developments included:
Marketers were advised to focus on leveraging AI to enhance customer experiences across the entire buyer journey, rather than chasing "shiny object" technologies [5].
The potential workforce impacts of AI remained a major area of concern. While AI vendors touted job creation potential, historical patterns suggest many organizations may prioritize cost-cutting over augmenting human capabilities [1]. Researchers also raised alarms about AI implementations negatively impacting employee well-being and engagement if not carefully managed [1].
There were also growing worries about AI's effects on media ecosystems and public discourse. Some experts warned of the potential for news environments to fracture along social and economic lines, with dire implications for democracy [1].
As AI capabilities continue to advance rapidly, 2023 highlighted the need for a multifaceted approach to harnessing its potential while mitigating risks. Key priorities going forward include:
With thoughtful development and deployment, AI has the potential to drive significant positive change. However, realizing that potential will require ongoing collaboration between technologists, policymakers, business leaders, and civil society [1][2][5].
Reference
[1]
[2]
[3]
[4]
[5]