5 Sources
[1]
From hyper-personal assistants to mind-reading tech -- this is how AI will transform everything by 2035
Picture a morning in 2035. Your AI assistant adjusts the lights based on your mood, reschedules your first meeting and reminds your child to take allergy medicine, all without a prompt. It's not science fiction; it's a likely reality driven by breakthroughs in ambient computing, emotional intelligence and agentic AI.

Just five years ago, ChatGPT was an unfamiliar name to most, let alone a daily assistant for summarization, search, reasoning and problem-solving. Siri and Alexa were the top names that came to mind when we wanted to call a friend, place an order or dim the lights. Yet now, in 2025, we have a plethora of AI assistants and chatbots to choose from, many of which are free, and which can do a lot more than control smart home devices. What feels advanced now may seem utterly simplistic in a decade, reminding us that the most mind-blowing AI capabilities of 2035 might still be beyond our current imagination.

By 2035, your AI assistant won't just respond -- it will anticipate. This evolution marks the rise of agentic AI, where assistants proactively act on your behalf using predictive analytics, long-term memory and emotion-sensing. These systems can forecast your needs by analyzing historical and real-time data, helping them stay one step ahead of your requests.

One assistant that's undergoing such a change is Amazon's Alexa. According to Daniel Rausch, Amazon's VP of Alexa and Echo, "Alexa will be able to proactively anticipate needs based on patterns, preferences, and context -- preparing your home before you arrive, suggesting adjustments to your calendar when conflicts arise, or handling routine tasks before you even think to ask." The AI will remember your child's travel soccer team schedule, reschedule your meetings when it detects stress in your voice and even dim your AR glasses when you appear fatigued.

"By 2035, AI won't feel like a tool you 'use'," Rutgers professor Ahmed Elgammal says. "It'll be more like electricity or Wi-Fi: always there, always working in the background."

And AIs will respond to more than just your speech. Chris Ullrich, CTO of Cognixion, a Santa Barbara-based tech company, is currently developing a suite of AI-powered Assisted Reality AR applications that can be controlled with your mind, your eyes, your head pose, and combinations of these input methods. "We strongly believe that agent technologies, augmented reality and biosensing technologies are the foundation for a new kind of human-computer interaction," he says.

AI in 2035 will see, hear and sense -- offering real-time support tailored to you. With multimodal capabilities, assistants will blend voice, video, text and sensor inputs to understand emotion, behavior and environment. This will create a form of digital empathy. Ullrich notes that these advanced inputs shouldn't aim to replicate human senses, but to exceed them. "In many ways, it's easier to provide superhuman situational awareness with multimodal sensing," he says. "With biosensing, real-time tracking of heart rate, eye muscle activation and brain state are all very doable today."

Amazon is already building toward this future. "Our Echo devices with cameras can use visual information to enhance interactions," says Rausch. "For example, determining if someone is facing the screen and speaking enables a more natural conversation without them having to repeat the wake word." In addition to visual cues, Alexa+ can now pick up on tone and sentiment.
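None of the vendors quoted here have published how such sensing would actually be implemented, but the "digital empathy" idea (fusing several weak signals into one estimate of the user's state) can be shown in a deliberately tiny sketch. Every name, weight and threshold below is invented for illustration, not taken from Amazon, Cognixion or anyone else:

```python
def estimate_stress(voice_tension: float, heart_rate_bpm: float,
                    gaze_jitter: float) -> float:
    """Fuse three normalized signals into one stress score (0..1).
    Weights and ranges are invented for illustration only."""
    hr_norm = min(max((heart_rate_bpm - 60) / 60, 0.0), 1.0)  # map 60-120 bpm to 0..1
    return 0.5 * voice_tension + 0.3 * hr_norm + 0.2 * gaze_jitter

def respond(stress: float) -> str:
    # A hypothetical 2035 assistant acting proactively on the fused estimate.
    if stress > 0.6:
        return "dim lights, queue a calming playlist, offer to reschedule the next meeting"
    return "no intervention"

score = estimate_stress(voice_tension=0.8, heart_rate_bpm=105, gaze_jitter=0.4)
print(f"stress={score:.2f} -> {respond(score)}")
```

A real system would learn such weightings from data rather than hard-coding them, but the shape of the pipeline (normalize the signals, fuse them, act on a threshold) is the same.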
"She can recognize if you're excited or using sarcasm and then adapt her response accordingly," Rausch says -- a step toward the emotionally intelligent systems we expect by 2035. Memory is the foundation of personalization. Most AI today forgets you between sessions. In 2035, contextual AI systems will maintain editable, long-term memory. Codiant, a software company focused on AI development and digital innovation, calls this "hyper-personalization," where assistants learn your routines and adjust suggestions based on history and emotional triggers. Rather than relying on one general assistant, you'll manage a suite of specialized AI agents. Research into agentic LLMs shows orchestration layers coordinating multiple AIs; each handling domains like finance, health, scheduling or family planning. These assistants will work together, handling multifaceted tasks in the background. One might track health metrics while another schedules meetings based on your peak focus hours. The coordination will be seamless, mimicking human teams but with the efficiency of machines. Ullrich believes the biggest breakthroughs will come from solving the "interaction layer," where user intent meets intelligent response. "Our focus is on generating breakthroughs at the interaction layer. This is where all these cutting-edge technologies converge," he explains. Rausch echoes this multi-agent future. "We believe the future will include a world of specialized AI agents, each with particular expertise," he says. "Alexa is positioned as a central orchestrator that can coordinate across specialized agents to accomplish complex tasks." He continues, "We've already been building a framework for interoperability between agents with our multi-agent SDK. Alexa would determine when to deploy specialized agents for particular tasks, facilitating communication between them, and bringing their capabilities together into experiences that should feel seamless to the end customer." Perhaps the most profound shift will be emotional intelligence. Assistants won't just organize your day, they'll help you regulate your mood. They'll notice tension in your voice, anxiety in your posture and suggest music, lighting or a walk. Ullrich sees emotion detection as an innovation frontier. "I think we're not far at all from effective emotion detection," he says. "This will enable delight -- which should always be a key goal for HMI." He also envisions clinical uses, including mental health care, where AI could offer more objective insights into emotional well-being. But with greater insight comes greater responsibility. Explainable AI (XAI), as described by arXiv and IBM, will be critical. Users must understand how decisions are made. VeraSafe, a leader in privacy law, data protection, and cybersecurity, underscores privacy concerns like data control and unauthorized use. "Users need to always feel that they're getting tangible value from these systems and that it's not just introducing a different and potentially more frustrating and opaque interface," Ullrich says. That emotional intelligence must be paired with ethical transparency, something Rausch insists remains central to Amazon's mission: "Our approach to trust doesn't change with new technologies or capabilities, we design all of our products to protect our customers' privacy and provide them with transparency and control." 
Rausch adds, "We'll continue to double down on resources that are easy to find and easy to use, like the Alexa Privacy Dashboard and the Alexa Privacy Hub, so that deeper personalization is a trusted experience that customers will love using."

Perhaps the most profound shift will be emotional intelligence. Assistants won't just organize your day, they'll help you regulate your mood. They'll notice tension in your voice or anxiety in your posture and suggest music, lighting or a walk. Ullrich sees emotion detection as an innovation frontier. "I think we're not far at all from effective emotion detection," he says. "This will enable delight -- which should always be a key goal for HMI." He also envisions clinical uses, including mental health care, where AI could offer more objective insights into emotional well-being.

But with greater insight comes greater responsibility. Explainable AI (XAI), as described in research on arXiv and by IBM, will be critical: users must understand how decisions are made. VeraSafe, a leader in privacy law, data protection, and cybersecurity, underscores privacy concerns like data control and unauthorized use. "Users need to always feel that they're getting tangible value from these systems and that it's not just introducing a different and potentially more frustrating and opaque interface," Ullrich says. That emotional intelligence must be paired with ethical transparency, something Rausch insists remains central to Amazon's mission: "Our approach to trust doesn't change with new technologies or capabilities, we design all of our products to protect our customers' privacy and provide them with transparency and control."

AI may replace jobs, but more than that, it will reshape them. An OECD study from 2023 reports that 27% of current roles face high automation risk, especially in repetitive, rules-based work. A more recent Microsoft study highlighted 40 jobs that are most likely to be affected by AI. Human-centric fields like education, healthcare, counseling and creative direction will thrive, driven by empathy, ethics and original thinking. Emerging hybrid roles will include AI interaction designers and orchestrators of multi-agent systems. Writers will co-create with AI, doctors will pair AI with human care and entrepreneurs will scale faster than ever using AI-enhanced tools. AI becomes an amplifier, not a replacement, for human ingenuity.

Even the boundaries between work and home will blur. "While Alexa+ may be primarily focused on home and personal use today, we're already hearing from customers who want to use it professionally as well," says Rausch. "Alexa can manage your calendar, schedule meetings, send texts and extract information from documents -- all capabilities that can bridge personal and professional environments."

A 2023 study from the University of Pennsylvania and OpenAI found that 80% of U.S. workers could see at least 10% of their tasks impacted by AI tools, and nearly 1 in 5 jobs could see more than half their duties automated with today's AI. Forbes reported layoffs rippling across sectors like marketing, legal services, journalism and customer service as generative AI takes on tasks once handled by entire teams. Yet the outlook is not entirely grim. As the New York Times reports, AI is also creating entirely new jobs, and Automation Alley's vision of a "new artisan" is gaining traction. As AI lifts mental drudgery, skilled manual work -- craftsmanship, artistry and hands-on innovation -- may see a renaissance. AI won't kill creativity; it may just unlock deeper levels of it.

Navigating the shift to an AI-augmented society demands preparation. The World Economic Forum emphasizes lifelong learning, experimentation with universal basic income (UBI) and education reform. Workers must develop both technical and emotional skills. Curricula must evolve to teach AI collaboration, critical thinking and data literacy. Social safety nets may be required during reskilling or displacement. Ethics and governance must be built into AI design from the start, not added after harm occurs. Ullrich notes the importance of designing with inclusivity in mind. "By solving the hard design problems associated with doing this in the accessibility space, we will create solutions that benefit all users," he says. Technologies developed for accessibility, like subtitles or eye tracking, often lead to mainstream breakthroughs.

As IBM and VeraSafe highlight, trust hinges on explainability, auditability and data ownership. Public understanding and control are key to avoiding backlash and ensuring equitable access. As AI augments more aspects of life, our relationship with it will define the outcomes. Daniel Rausch believes the key lies in meaningful connection: "The goal isn't just responding to commands but understanding your life and meaningfully supporting it." We must ensure systems are inclusive, transparent and designed for real value.
As AI grows in intelligence, the human role must remain centered on judgment, empathy and creativity. Ultimately, the question isn't "What can AI do?" It's "What should we let AI do?"

By 2035, AI will be a planner, therapist, tutor and teammate. But it will also reflect what we value -- and how we choose to interact with it. Ullrich emphasizes that the future won't be defined just by what AI can do for us, but by how we engage with it: "Voice may be useful in some situations, gesture in others, but solutions that leverage neural sensing and agent-assisted interaction will provide precision, privacy and capability that go well beyond existing augmented reality interaction frameworks."

Yet, amid this evolution, a deeper question of trust remains. Emotional intelligence, explainability and data transparency will be essential, not just for usability but for human agency. "Services that require private knowledge need to justify that there is sufficient benefit directly to the user base," Ullrich says. "But if users see this as a fair trade, then I think it's a perfectly reasonable thing to allow."

As AI capabilities rise, we must consciously preserve human ones. The most meaningful advances may not be smarter machines, but more mindful connections between humans and technology. The promise of AI is so much more than productivity; it's dignity, inclusion and creativity. If we design wisely, AI won't just help us get more done, it will help us become more of who we are. And that is something worth imagining.
[2]
AI is not a strategy: why business leaders need better questions, not louder directives
The AI boom is here again. This time, it is louder, more expensive, and often more misguided. Across industries, leadership teams are sprinting to adopt AI with urgency that often feels more reactive than rational. But saying "use AI" is like telling your team to "be innovative." It is more a sentiment than a strategy.

In this moment of mass experimentation, some companies are finding signal amid the noise. They are not just layering AI on top of business as usual, but using it to solve problems that already mattered. And they are seeing returns not because they adopted AI, but because they understood why they were doing it.

More homework, less hype

"Every vendor is talking about their AI solutions," said industry analyst and CMA Intelligence founder Chris Marron. "There's a lot of hype and some real substance too, but too many businesses jump straight to automation without understanding what they're automating."

That impulse to chase productivity gains, especially by cutting labor, is flawed and dangerous. "If you're using AI to automate labor, you're probably doing the wrong thing," Marron explained. "When the total talent pool isn't actually shrinking, cutting people just shrinks your footprint in the market. Use AI to help the same team deliver 40% more, not to shed 40% of the team."

Beyond just bad math, the issue is also bad framing. Automation does not inherently translate to better customer experiences, increased revenue, or smarter operations. In fact, it can create more work if it is not paired with clear goals.

Why this AI wave is different

In past tech revolutions, from the rise of the internet to the advent of cloud computing, enterprise businesses held the advantage. They had the capital, infrastructure, and IT teams to adopt early, experiment widely, and scale quickly. This time, the AI curve looks different. AI is the first major wave where being big may actually slow you down.

"AI is reversing the traditional power dynamic in business communications. For the first time, small and mid-sized businesses can access enterprise-grade capabilities without the overhead," said Dimitri Osler, CIO of Wildix. With models and tools available off the shelf and open APIs making integration accessible, the AI arms race is being led not just by deep pockets, but by speed and clarity of purpose. Enterprise companies may still have scale, but they are no longer the only ones with power.

Use cases, not hopes

Lowe's did not wander into AI. It went in with a map. "Success doesn't come from chasing AI's novelty," said Chandhu Nair, SVP of Data, AI and Innovation at Lowe's. "It comes from aligning its development with your core business values and long-term vision."

That vision led to the creation of Mylow, a generative AI-powered assistant that helps both customers and associates. Unlike a generic chatbot, Mylow understands Lowe's specific inventory, installation services, and customer pain points. For store associates, Mylow Companion provides the same level of support on the sales floor, spreading expertise across departments. These tools were built to work. "If it doesn't move the needle on conversion, efficiency, or customer experience, it doesn't get built," Nair said. It is a stark contrast to what Marron describes as "go do the AI thing" KPIs: vague executive mandates with no tactical direction.
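Marron's 40% arithmetic from earlier is worth making concrete. A back-of-the-envelope sketch, under the purely illustrative assumption that AI multiplies each person's output by 1.4x:

```python
team, per_person = 10, 1.0               # baseline: a 10-person team produces 10 units
ai_multiplier = 1.4                      # illustrative: AI lifts individual output 40%

keep_team = team * per_person * ai_multiplier           # 10 people -> 14.0 units
shed_40pct = (team * 0.6) * per_person * ai_multiplier  # 6 people  ->  8.4 units

print(f"same team + AI:        {keep_team:.1f} units (footprint grows 40%)")
print(f"40% smaller team + AI: {shed_40pct:.1f} units (below the original 10)")
```

Cutting 40% of the team while gaining 40% per person leaves total output below the pre-AI baseline; the gain and the cut do not cancel, which is Marron's "shrinking footprint" in miniature.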
Agentic AI that actually thinks and acts

Wildix, a European-born communications company with a global footprint, has taken that strategic clarity and turned it into something practical with the recent launch of its embedded Agentic AI capabilities. Instead of building tools that simply respond to prompts, Wildix engineers AI that behaves like autonomous digital teammates, not for novelty's sake, but to solve the right problems in the hands of the right customers.

"In healthcare, we've seen our AI flatten the three daily peaks of appointment demand," said Stewart Donnor, Wildix Global Head of Sales Engineering. "It handles routine bookings and inquiries around the clock, freeing human staff to focus on more critical issues."

This is automation used to support humans, not replace them. And it is deeply tailored. Wildix partners with each client to identify their operational pinch points, then builds systems that solve those exact problems. "The problem isn't that leaders expect too much from AI," said Steve Osler. "It's that they expect the wrong things: faster output instead of smarter operations. It's about saving time. And time, when given back to humans, becomes upsell, innovation, loyalty. That's where the real growth lives." The brands that grow are the ones using AI to do more: not to eliminate people, but to amplify them.

Guardrails before glory

As AI becomes more capable, the risks grow. "The pace of AI development has exceeded even the most ambitious projections," Nair noted. "That acceleration has required agility, but also discipline." Lowe's maintains that discipline through its AI Transformation Office, a cross-functional team that includes engineering, legal, and product leaders. Every project is evaluated for value and risk. Lowe's also integrates NeMo Guardrails from NVIDIA to ensure conversational safety and privacy across its AI systems.

Wildix takes a similarly careful approach. "Our platform is secure by design," said Donnor. "We maintain strict data privacy compliance, including GDPR and HIPAA, and do not share customer data across tenants."

This level of diligence matters more than ever. As Marron pointed out, "If your customer-facing AI gives bad information, that's not just an error. That's a new company policy. That's what happened with Air Canada. You're liable."

A smarter adoption framework

To move from vague excitement to strategic implementation, companies need a better framework, one that starts with asking the right questions:

* What specific task or problem do we want to solve?
* What kind of data do we already have?
* What outcome can we measure?
* Where can humans stay in the loop?

Donnor describes their approach as starting small with practical tasks. "Start with the low-hanging fruit," he said: simple use cases that prove ROI fast, then expand. For some clients, that means using Wildix's Kite widget to turn a website into a functional FAQ assistant. For others, it is a full-scale scheduling and triage automation. "You don't have to solve everything at once," Donnor said. "But you do need to start with something real. If the AI can't show ROI, it's not ready."

Post-Google search, AI portals, and brand relevance

As customer behavior shifts from search engines to AI interfaces, brands face a new challenge: how do you stay visible when users stop Googling and start asking ChatGPT? We're entering a world where AI becomes the first point of contact. That means brands need to build AI-ready customer portals now or risk losing the relationship entirely.
The biggest challenge is keeping control of that relationship. "The brand doesn't want OpenAI to own that interaction," Marron said. "So companies will invest in portals that offer fast, AI-driven support while still keeping the customer in their ecosystem."

And that brings us back to the central theme. Not AI for its own sake, but AI as infrastructure. Not automation to shrink the business, but automation to stretch its capabilities.

Final word: you cannot outsource thinking

AI tools will continue to improve. The real question is whether your organization's thinking will improve, too. As Marron put it, "Don't look at what you can automate away. Look at what you can enable with that." The companies that win this next chapter will not be the ones who jumped in first. They will be the ones who paused, asked better questions, and made AI do something useful. That starts with being able to articulate the problem you're trying to solve. Steve Osler warns, "The real risk isn't bad AI. It's lazy thinking. If you can't explain what problem you're solving, no tool will save you."

VentureBeat newsroom and editorial staff were not involved in the creation of this content.
[3]
AI's promise of opportunity masks a reality of managed displacement
Cognitive migration is underway. The station is crowded. Some have boarded while others hesitate, unsure whether the destination justifies the departure.

Future-of-work expert and Harvard University Professor Christopher Stanton commented recently that the uptake of AI has been tremendous, observing that it is an "extraordinarily fast-diffusing technology." That speed of adoption and impact is a critical part of what differentiates the AI revolution from previous technology-led transformations, like the PC and the internet. Demis Hassabis, CEO of Google DeepMind, went further, predicting that AI could be "10 times bigger than the Industrial Revolution, and maybe 10 times faster."

Intelligence, or at least thinking, is increasingly shared between people and machines. Some people have begun to regularly use AI in their workflows. Others have gone further, integrating it into their cognitive routines and creative identities. These are the "willing": the consultants fluent in prompt design, the product managers retooling systems and those building their own businesses that do everything from coding to product design to marketing. For them, the terrain feels new but navigable. Exciting, even.

But for many others, this moment feels strange, and more than a little unsettling. The risk they face is not just being left behind. It is not knowing how, when and whether to invest in AI: a future that seems highly uncertain, and one in which it is difficult to imagine their place. That is the double risk of AI readiness, and it is reshaping how people interpret the pace, promises and pressure of this transition.

Is it real?

Across industries, new roles and teams are forming, and AI tools are reshaping workflows faster than norms or strategies can keep up. But the significance is still hazy, the strategies unclear. The end game, if there is one, remains uncertain. Yet the pace and scope of change feels portentous. Everyone is being told to adapt, but few know exactly what that means or how far the changes will go.

Some AI industry leaders claim huge changes are coming, and soon, with superintelligent machines emerging possibly within a few years. But maybe this AI revolution will go bust, as others have before, with another "AI winter" to follow. There have been two notable winters. The first was in the 1970s, brought about by computational limits. The second began in the late 1980s after a wave of unmet expectations, high-profile failures and under-delivery of "expert systems." These winters were characterized by a cycle of lofty expectations followed by profound disappointment, leading to significant reductions in funding and interest in AI. Should the excitement around AI agents today mirror the failed promise of expert systems, this could lead to another winter.

However, there are major differences between then and now. Today, there is far greater institutional buy-in, consumer traction and cloud computing infrastructure than in the expert-systems era of the 1980s. There is no guarantee that a new winter will not emerge, but if the industry fails this time, it will not be for lack of money or momentum. It will be because trust and reliability broke first.

Cognitive migration has started

If "the great cognitive migration" is real, this remains the early part of the journey.
Some have boarded the train while others still linger, unsure about whether or when to get onboard. Amid the uncertainty, the atmosphere at the station has grown restless, like travelers sensing an itinerary change that no one has announced. Most people have jobs, but they wonder about the degree of risk they face. The value of their work is shifting. A quiet but mounting anxiety hums beneath the surface of performance reviews and company town halls.

Already, AI can accelerate software development by 10 to 100X, generate the majority of client-facing code and compress project timelines dramatically. Managers are now able to use AI to create employee performance evaluations. Even classicists and archaeologists have found value in AI, having used the technology to understand ancient Latin inscriptions.

The "willing" have an idea of where they are going and may find traction. But for the "pressured," the "resistant" and even those not yet touched by AI, this moment feels like something between anticipation and grief. These groups have started to grasp that they may not be staying in their comfort zones for long. For many, this is not just about tools or a new culture, but whether that culture has space for them at all. Waiting too long is akin to missing the train and could lead to long-term job displacement. Even those I have spoken with who are senior in their careers and have begun using AI wonder if their positions are threatened.

The narrative of opportunity and upskilling hides a more uncomfortable truth. For many, this is not a migration. It is a managed displacement. Some workers are not choosing to opt out of AI. They are discovering that the future being built does not include them. Belief in the tools is different from belonging in the system those tools are reshaping. And without a clear path to participate meaningfully, "adapt or be left behind" begins to sound less like advice and more like a verdict.

These tensions are precisely why this moment matters. There is a growing sense that work, as they have known it, is beginning to recede. The signals are coming from the top. Microsoft CEO Satya Nadella acknowledged as much in a July 2025 memo following a reduction in force, noting that the transition to the AI era "might feel messy at times, but transformation always is." But there is another layer to this unsettling reality: the technology driving this urgent transformation remains fundamentally unreliable.

The power and the glitch: Why AI still cannot be trusted

And yet, for all the urgency and momentum, this increasingly pervasive technology itself remains glitchy, limited, strangely brittle and far from dependable. This raises a second layer of doubt, not only about how to adapt, but about whether the tools we are adapting to can deliver. Perhaps these shortcomings should not be a surprise, considering that it was only several years ago that the output from large language models (LLMs) was barely coherent. Now, however, it is like having a PhD in your pocket; the idea of on-demand ambient intelligence, once science fiction, is almost realized.

Beneath their polish, however, chatbots built atop these LLMs remain fallible, forgetful and often overconfident. They still hallucinate, meaning that we cannot entirely trust their output. AI can answer with confidence, but not accountability. This is probably a good thing, as our knowledge and expertise are still needed. They also do not have persistent memory and have difficulty carrying forward a conversation from one session to another.
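That session-to-session forgetfulness follows from the architecture: the model's weights never change, so its only "memory" is the transcript re-sent with each request, truncated to a fixed window. A toy sketch (hypothetical names; production systems budget tokens rather than message counts) of why older turns simply fall out of view:

```python
WINDOW = 6  # max messages re-sent per turn (real systems budget tokens, not messages)

def frozen_model(context: list[str]) -> str:
    # Stand-in for an LLM call: a real model conditions only on `context`;
    # anything trimmed away is unrecoverable, which is why threads "vanish".
    return f"(reply based only on the last {len(context)} messages)"

def chat_turn(transcript: list[str], user_msg: str) -> list[str]:
    """The model itself never learns (weights are frozen); all 'memory'
    is the recent transcript we choose to send back in with each turn."""
    transcript.append(f"user: {user_msg}")
    context = transcript[-WINDOW:]        # everything older falls out of view
    transcript.append(f"assistant: {frozen_model(context)}")
    return transcript

transcript: list[str] = []
for msg in ["hi", "my name is Ada", "plan my day", "what's my name?", "hello again"]:
    transcript = chat_turn(transcript, msg)

# By the final turn, the opening messages are already outside the window.
print(transcript[-1])
```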
They can also get lost. Recently, I had a session with a leading chatbot, and it answered a question with a complete non-sequitur. When I pointed this out, it responded again off-topic, as if the thread of our conversation had simply vanished.

They also do not learn, at least not in any human sense. Once a model is released, whether by Google, Anthropic, OpenAI or DeepSeek, its weights are frozen. Its "intelligence" is fixed. Instead, continuity of a conversation with a chatbot is limited to the confines of its context window, which is, admittedly, quite large. Within that window and conversation, chatbots can absorb knowledge and make connections that serve as learning in the moment, and they appear increasingly like savants. These gifts and flaws add up to an intriguing, beguiling presence. But can we trust it?

Surveys such as the 2025 Edelman Trust Barometer show that AI trust is divided. In China, 72% of people express trust in AI. But in the U.S., that number drops to 32%. This divergence underscores how public faith in AI is shaped as much by culture and governance as by technical capability. If AI did not hallucinate, if it could remember, if it learned, if we understood how it worked, we would likely trust it more. But trust in the AI industry itself remains elusive. There are widespread fears that there will be no meaningful regulation of AI technology, and that ordinary people will have little say in how it is developed or deployed. Without trust, will this AI revolution flounder and bring about another winter? And if so, what happens to those who have invested time, energy and their careers? Will those who have waited to embrace AI be better off for having done so?

Will cognitive migration be a flop?

Some notable AI researchers have warned that AI in its current form -- based primarily on the deep learning neural networks upon which LLMs are built -- will fall short of optimistic projections. They claim that additional technical breakthroughs will be needed for this approach to advance much further. Others do not buy into the optimistic AI projections at all. Novelist Ewan Morrison views the potential of superintelligence as a fiction dangled to attract investor funding. "It's a fantasy," he said, "a product of venture capital gone nuts."

Perhaps Morrison's skepticism is warranted. However, even with their shortcomings, today's LLMs are already demonstrating huge commercial utility. If the exponential progress of the last few years stops tomorrow, the ripples from what has already been created will have an impact for years to come. But beneath this movement lies something more fragile: the reliability of the tools themselves.

The gamble and the dream

For now, exponential advances continue as companies pilot and increasingly deploy AI. Whether driven by conviction or fear of missing out, the industry is determined to move forward. It could all fall apart if another winter arrives, especially if AI agents fail to deliver. Still, the prevailing assumption is that today's shortcomings will be solved through better software engineering. And they might be. In fact, they probably will, at least to a degree. The bet is that the technology will work, that it will scale and that the disruption it creates will be outweighed by the productivity it enables. Success in this adventure assumes that what we lose in human nuance, value and meaning will be made up for in reach and efficiency. This is the gamble we are making.
And then there is the dream: AI will become a source of abundance widely shared, will elevate rather than exclude, and will expand access to intelligence and opportunity rather than concentrate it. The unsettling part lies in the gap between the two. We are moving forward as if taking this gamble will guarantee the dream. It is the hope that acceleration will land us in a better place, and the faith that it will not erode the human elements that make the destination worth reaching.

But history reminds us that even successful bets can leave many behind. The "messy" transformation now underway is not just an inevitable side effect. It is the direct result of speed overwhelming human and institutional capacity to adapt effectively and with care. For now, cognitive migration continues, as much on faith as on belief. The challenge is not just to build better tools, but to ask harder questions about where they are taking us. We are not just migrating to an unknown destination; we are doing it so fast that the map is changing while we run, moving across a landscape that is still being drawn.

Every migration carries hope. But hope, unexamined, can be risky. It is time to ask not just where we are going, but who will get to belong when we arrive.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
[4]
It's time for an AI-triggered correction - and many of you will feel the pain
Your email inbox and text messages are likely getting choked with entreaties from application software vendors, consultants, integrators, MSPs, offshore developers, etc. They all want you to know that:

I promise this won't be a diatribe on AI marketing, but let's set the context for the correction. AI market messaging is currently over the top, hyperbolic and feels very much like automobile television advertising. Just slip 'AI' into these tired, tried and true car ad slogans and you have today's state of the AI hype:

The greater AI ecosystem is ginning away trying to manufacture urgency. They want to trigger a stampede of market interest and sales in AI. And because many AI demos look so darn cool, slick and powerful, these marketers have lots of ways to make potential buyers go 'Ooh' and 'Ahh'. But just because tech sellers have cool demos doesn't mean those demos translate into sales, or massive numbers of sales, in a very short time. Nope - buyers are more savvy, cautious and premeditated than that. And when too much vendor enthusiasm outruns the actual buying mindsets out there, a correction is highly likely.

Specific to AI, not every company will adopt AI the same way, at the same time or for the same reasons. If marketers and investment bankers realized this, they'd have more measured market expectations. Ronald Reagan once opined, "Trust, but verify". AI could be a big transformative change agent but, and this is a key point, where are the numerous proof points that show:

About the only proof points out there are lots of AI pilot efforts or tests. A call last night with a top consulting marketing executive was eye-opening, as her team is desperately looking for client stories re: AI and their field teams. They aren't getting any stories or anyone willing to go on the record. And they aren't the only ones with this issue. A lack of great AI customer proof points is also a problem at software user conferences. Again, the hype machinery is outrunning the AI reality.

Tech marketers, I'm convinced, are either very forgetful, easily excited and/or too young to remember other technology waves. If they remembered, they'd likely temper their AI enthusiasm some. Let's look back a bit, then. Did every business move their systems to the cloud? That's a big NO. Two of the largest ERP vendors spent material time with analysts this Spring discussing their latest programs to migrate their more laggardly customers off of old on-premises versions of their software. And that's just ERP. If you look at the hardware/software running on capital/machine equipment, you'd find technology that's several decades old and, quite likely, not recently patched. Likewise, even if firms migrated to a cloud solution, they might have neglected to get a multi-tenant cloud solution.

Did the move from batch systems to online real-time software happen in a snap? No. Has there ever been one of these big shifts in technology where most every firm moved off one environment onto another in a year or two? I haven't seen one, and I've lived through decades and decades of these major change opportunities.

Consumer technology also has these long-tail technology change phenomena. Just this week, the New York Times noted that AOL is going to finally cease the dial-up modem access that's been in use for decades. Why it took so long for customers to move to broadband, cellular or other connection capabilities is not the key point; rather, we should look at how long pre-existing technologies can stay in use.
Newton's Law of Inertia (i.e., a body at rest remains at rest and a body in motion remains in motion unless acted upon by a new force) somewhat explains the power of incumbent technologies, processes, workflows, etc. Companies will only be able to implement so much AI capability at a time, while other use cases will need to continue with old tech. This explains why AI adoption will be measured regardless of the AI cheerleading and marketing hype being hurled at buyers. The adoption of new technologies might follow a bell-curve distribution (more on that later) or some other rollout path. Regardless of the exact path, it doesn't happen all at once for all firms. Vendors and consultants might want companies to adopt AI en masse and en toto, but it won't happen. Why?

Amy Wilson is a great software executive (ex-Workday & SAP) who is doing advisory work these days with colleague Meg Bear. This week, she penned an interesting piece on LinkedIn about what AI's Transformation Maturity Model might look like. According to Wilson:

What I'm seeing: organizations that celebrate tactical AI wins today will need to move to collaborative invention of entirely new workflows tomorrow. And that "tomorrow" is coming faster than anyone expects. I've been sketching out what AI transformation maturity might look like - how NewCos can start with strategic reinvention while established companies need to get started with tactical wins first. And how the expectations keep accelerating for everyone.

For established firms, Amy sees companies deploying AI initially for tactical improvements, followed by process improvements. Later, strategic transformations can occur, followed by organizational re-invention. If a company can not only move through these phases but do so in an ever-quickening manner, it can achieve 'exponential impact'. I'll leave it to you to read Amy's piece for a more complete discussion.

I thought Amy's thinking was spot on. It provides a logical progression in how AI can change a firm, its people, processes and more. The early stages help with more immediate issues that clearly have a strong internal and operational focus. Later stages deal with business and competitive advantage matters. When I saw her piece, I believed it could be supplemented with additional perspectives re: the specific business needs of prospective AI users. This additional lens is needed as not all prospective tech buyers need the same thing at the same time. In fact, their personal situation informs how much tech they will buy, what kinds of problems the tech will solve and how far the tech will propel their needs for competitive parity or competitive advantage. While I didn't use this graphic in my LinkedIn response, it helps bring the following comments to life:

Bottom line: AI's role changes based on the business need and unique challenges each firm is facing at a point in time. For this reason, AI adoption, like prior technology innovation waves, will take time. Not all buyers will have AI as a top priority in the short-term.

A couple of decades ago, Geoffrey Moore penned Crossing the Chasm. This book has been a must-read in Silicon Valley circles since its initial publication. Moore's key thesis is that a technology adoption curve exists and that five types of technology buyers exist.
These buyers are the innovators, the early adopters, the early majority, the late majority and the laggards. As you can surmise, sales and product uptake of new technologies like AI will follow a predictable adoption curve, with the risk-tolerant buying this tech now and a number of others adopting some measure of 'wait and see' patience. Each of Moore's groups takes a while before they pull the trigger. Not every firm is risk tolerant or flush with cash. Some like their tech to be low/no risk, very proven and/or low cost. For some, tech is a commodity item: a means to an end, albeit a low-cost/low-value one. Some tech buyers want competitive advantage - some competitive parity. Sales of business tech rarely happen all at once. These deals are highly dependent on a number of factors. These include:

So, is the hype cycle for AI assuming a market uptake that's inconsistent with past technology adoption rates? In a recent LinkedIn post, colleague Josh Greenbaum noted:

So far investments in LLM-based AI have proven adept at solving 50-cent problems, some of which are genuinely important to their users. Solving million-dollar problems with this technology has proven to be very difficult. As such there isn't the critical mass of new million-dollar capabilities that can generate enough value to justify the enormous investments in this form of AI on the part of tech companies, much less their erstwhile customers. Solving thousands, if not tens of thousands of 50-cent problems isn't going to be enough, particularly when the cost of building all those data centers is exacerbating an already hypercritical problem with electricity generation and greenhouse gas emissions.

He concluded his remarks with: "Vendors: it's looking like write-off time." He's right, and the astronomical valuations some firms have are just unsupportable. Greenbaum also linked a recent New York Times article that stated:

Nearly four decades ago, when the personal computer boom was in full swing, a phenomenon known as the "productivity paradox" emerged. It was a reference to how, despite companies' huge investments in new technology, there was scant evidence of a corresponding gain in workers' efficiency. Today, the same paradox is appearing, but with generative Artificial Intelligence. According to recent research from McKinsey & Company, nearly eight in 10 companies have reported using generative A.I., but just as many have reported "no significant bottom-line impact."

But the percentage of companies abandoning most of their A.I. pilot projects soared to 42% by the end of 2024, up from 17% the previous year, according to a survey of more than 1,000 technology and business managers by S&P Global, a data and analytics firm. Projects failed not only because of technical hurdles, but often because of "human factors" like employee and customer resistance or lack of skills, said Alexander Johnston, a senior analyst at S&P Global.

Two years ago, I listened with amazement as different application software vendors were predicting record revenue increases as they monetized their puny AI capabilities (e.g., remember how an AI Job Description Generator was going to move software companies onto a Fortune 5 spot?). It didn't happen, as these executives were prognosticating in a vacuum. They ignored basic business decision-making principles like:

Almost all new technologies possess a long tail of market adoption. While lots of hypesters want you to believe that "This technology is different!", I haven't seen any evidence, none, that this is the case. AI may not, after all, be the one technology that breaks the mold.
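Moore's five segments sit on the classic Rogers diffusion bell curve, with textbook shares of roughly 2.5%, 13.5%, 34%, 34% and 16%. A minimal sketch of why sales arrive in waves rather than all at once: even a wildly successful technology has reached only about 16% of its market by the time the early adopters are exhausted.

```python
# Rogers' classic diffusion segments; Moore's "chasm" sits between
# the early adopters and the early majority.
segments = [
    ("innovators",     2.5),
    ("early adopters", 13.5),
    ("early majority", 34.0),
    ("late majority",  34.0),
    ("laggards",       16.0),
]

reached = 0.0
for name, share in segments:
    reached += share
    print(f"{name:<15} {share:>5.1f}% of buyers -> {reached:>5.1f}% of market reached")
```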
There will be a long tail - that's a given. AI will trigger a period of lots of early experimentation. But experimentation is not the same as massive, large-scale adoption. That's going to take a while.

So, let's all dial down the marketing hysteria and start to approach AI in a sane, rational, premeditated and eyes-wide-open manner. It's the best thing we can all do for our respective firms, people and business processes. There's a big market conflict between irrational or hyperbolic marketers and the buying realists out there. Greenbaum's right - a correction is coming and it's going to hurt. The cart is miles ahead of the horse right now. We're all seeing this...
[5]
AI uptake - what does marketing's embrace of the technology tell us?
Few industries have adopted artificial intelligence (AI) as enthusiastically as marketing. But for a sector concerned with both message and medium, what does its embrace of the technology tell us? And what might the subtext be?

Let's look at a survey from an organization called Outcomes Rocket, which (on the face of it) presents us with an infinite regress in research terms. That's because Outcomes Rocket is itself a strategic marketing consultancy, one focused on driving growth and engagement in healthcare via technologies such as AI. Clients include private practitioners, US healthcare organizations, and technology specialists like Care.ai. As a result, any survey it conducts risks being a hall of mirrors, reflecting its own concerns back at itself and its customers. But that's not to say that its findings are wrong, untrustworthy, or indeed, entirely positive (more on the latter in a moment). It is more that AI already exists in an environment of total hype, so it is hard to take any evidence of its effects at face value.

To digress for a moment, the over-invested AI industry is largely to blame for that noise, with The Economist observing this year that company valuations are "verging on the unhinged", driven by industry CEOs' futurist pronouncements about "PhD-level" large language models (LLMs), genius machines, and superintelligence. But that bubble was pricked by OpenAI's Sam Altman this month, when he said that AGI (artificial general intelligence) was "not a super-useful term" anymore, despite the quest for it being his company's founding aim! The conclusion? AGI served its purpose as investment bait, but as evidence piles up that it will never be reached via LLMs and chatbots, it's time to move on and find new messages to keep the fanboys clicking. In short: it has all just been marketing.

But back to Outcomes Rocket, whose report 'AI in Marketing 2025' is -- against all odds -- surprisingly useful, and perhaps more revealing and controversial than its commissioners intended. The document is based on research among 1,229 industry professionals (how many are clients is not revealed). Just under 22% of respondents are CEOs, CMOs, directors, or senior managers, and nearly 48% are mid-tier managers or sector specialists. The rest are entry-level respondents, says the company. The report promises a spread of sizes, too: over 26% are "large organizations" -- though at "over 251 employees" that assessment is questionable -- 30% are medium sized (50-250 employees), and just under 44% are small operations. In reality, then, this appears to be an SME survey.

According to Outcomes Rocket, "the most impressive part" is that just under 90% of marketers already include AI in their processes. However, the report also notes that generative AI (gen AI) is being used by 94% of respondents. So, that four-point discrepancy is presumably due to a subset of marketers experimenting with tools rather than deploying them formally.

Among gen AI users, ChatGPT predictably dominates, used by just under 95% of the group, but Google Gemini seems to be on the rise, deployed by nearly half (49.5%). Coming up on the outside is Anthropic's Claude on 13.8%. So, it will be interesting to see what happens to Dario Amodei's company now that a class action against it for scraping millions of pirated books has been greenlit in the US. Anthropic may lose that case, and any attempt by it to set the action or judgement aside will damage trust in the industry.

There are two fascinating statistics in the mix, however.
First, X.ai's Grok is just a small fry in this space, with only eight percent marketing adoption -- despite it being plugged directly into X, a forum that was once critical for customer engagement. Perhaps the overt politicization of that platform (and its adoption by CEOs as a carrier of AI cultism) has done more damage than leaders realize.

But second comes the real surprise: this US-centric survey finds China's DeepSeek on 13.4% marketing adoption -- neck and neck with Claude and eclipsing Grok, all from a standing start. So, it seems that China's lower costs are persuasive, despite the security concerns.

So, what are marketers using ChatGPT, Gemini, Claude and other tools for -- including the likes of Jasper and Copy.ai? The answer is to create the content itself: entire campaigns, blog posts, and more. The report says:

Generative AI is mainly applied to create content: 82.4% of marketers are utilizing it to write articles, create social media captions, develop creatives, and generate ideas like headlines or taglines.

I must say, I find this across-the-board outsourcing of creativity depressing, with campaign text, images, video, messaging, captions, targeting, and more now deployable in minutes, or less, via AI. I suspect, in time, that marketers may find this application offers diminishing returns, except in the time saved by speeding up those processes. A tide of boredom -- and in some quarters, real anger -- is rising about AI slop, hallucinations, shouty machine text, and samey images, many of which are trained on copyrighted work. In such a world, the authentic, the human, and the original seem likely to engage more.

Indeed, this is implicit in Outcomes Rocket's own data. The report notes that over 93% of marketers "frequently encounter issues with AI-generated marketing content, such as errors, biases, or irrelevant outputs". Which surely raises the question: why do just 71% of them review or edit that content before publishing it?

Meanwhile, over 42% of marketers say they are confident they can spot AI-generated content, such as text and images. Despite this, it seems they are happy to churn it out -- and in a significant number of cases, not even to check it. This hardly speaks of respect for their audiences. After all, if marketers can spot machine content, you can bet that customers can too. Indeed, the real punchline is this: just 21% of marketers report enhanced customer engagement from using gen AI. Ouch.

The Outcomes Rocket survey provides other glimpses into the minds of AI-enabled marketers. Unlocking the value of trusted data is -- you would think -- the most useful purpose of this technology: yet analytics (58% of respondents), research (55%), and personalization (46%) lag a long way behind using ChatGPT to churn out more and more stuff. This is yet more evidence that, despite its (now deprecated) claims of pursuing AGI, curing cancer, and solving the world's most urgent problems, much of OpenAI's subscription revenue undoubtedly comes from people using it as a quick and dirty means to make stuff (creatives' copyright be damned!).

Again, this is implicit in the survey data. Asked what the key benefits of the technology are, there is an overwhelming winner: it saves time, say 86% of respondents. Comparatively few (45%) report that AI improves content quality -- which is shocking, given the big majority who use it for text and image creation.
Yet nowhere on the list is making smarter decisions, for example, or revealing new discoveries and insights: all the things that vendors promise their technology will do for humanity. And as for achieving a higher return on investment (ROI) -- the factor that venture capital investors look for -- it is reported by just 10% of marketers. Wow.

Indeed, there is more bad news for the industry. Nearly 90% of marketers believe AI will "cost them jobs in the next two to three years" -- mostly in junior roles. Nearly two-thirds of respondents believe those losses will be either moderate or significant. Set alongside the earlier finding that marketers are using AI to train new professionals in content creation, this is another negative impact: a handing over of creativity to machines, and a pulling up of the ladder to the next generation of marketers, making it harder for juniors to gain a foothold in the industry.

Yet despite these poor results -- the low customer engagement, the absent ROI, the setting adrift of the junior workforce, and the high incidence of problems with gen AI content -- it seems that marketers want more of it. Asked what the technology's biggest impact will be over the next two to three years, 79% said the introduction of better content-generation tools. And there was a gloomy footnote for those of us who lament the contemptuous attitude of AI vendors to creators' copyright -- and even to human creativity itself. Just 13% of marketers said greater ethical AI transparency will be important to them. It seems they just don't care.

Interesting stuff -- yet hardly any cause for celebration. The key takeaways for me are these:

* First, here is yet another report (after dozens of others) showing that organizations are not deploying AI to be smarter; they just want to save money and time.
* Second, demonstrating a hard ROI is not happening in this leading adopter industry.
* And third, the report arrived with a fanfare of upbeat messaging from Outcomes Rocket itself: nearly all marketers are using generative AI! shrieked its own marketing. That was the message they wanted the media to hear.

And this is the problem we all face in the AI Spring: the endless noise and hype is drowning out the warning messages beneath. The fact is, all is not well in AI-enabled sectors, and the reality of adoption contradicts vendors' claims about what the technology is really for. A cure for cancer, as Altman boasts? Nope, a cheap means of churning out content fast. In this new reality, pretending that everything is fine when it clearly isn't risks triggering another AI winter, as investors realize that users are just not getting what vendors promised them. And that ROI: where is it?
A comprehensive look at how AI is expected to evolve and impact various aspects of life and business by 2035, including personal assistants, business strategies, and societal changes.
By 2035, AI assistants are expected to become far more sophisticated and integrated into our daily lives. Amazon's VP of Alexa and Echo, Daniel Rausch, envisions a future where "Alexa will be able to proactively anticipate needs based on patterns, preferences, and context" [1]. These assistants will not just respond to commands but will act autonomously, managing tasks before users even think to ask.

Source: Tom's Guide

Chris Ullrich, CTO of Cognixion, is developing AI-powered Assisted Reality AR applications that can be controlled with the mind, eyes, and head movements. He believes that "agent technologies, augmented reality and biosensing technologies are the foundation for a new kind of human-computer interaction" [1]. This points to a future where AI interfaces become more intuitive and seamlessly integrated with human cognition.

Future AI systems are expected to be multimodal, blending inputs from voice, video, text, and sensors to understand emotion, behavior, and environment. Amazon is already building towards this, with Rausch noting that their Echo devices can use visual information to enhance interactions [1]. By 2035, these systems are predicted to offer a form of digital empathy, adapting responses based on users' emotional states.

While AI adoption is accelerating across industries, experts warn against hasty implementation without clear strategies. Chris Marron, an industry analyst, cautions that "too many businesses jump straight to automation without understanding what they're automating" [2]. The focus should be on using AI to enhance productivity rather than simply replace workers.

Lowe's approach to AI implementation serves as a model for strategic adoption. Chandhu Nair, SVP of Data, AI and Innovation at Lowe's, emphasizes that "Success doesn't come from chasing AI's novelty. It comes from aligning its development with your core business values and long-term vision" [2]. Their AI assistant, Mylow, was developed with specific business goals in mind, focusing on improving customer experience and operational efficiency.

Source: VentureBeat

The rapid adoption of AI is reshaping the job market. Christopher Stanton, a Harvard University professor, describes AI as an "extraordinarily fast-diffusing technology" [3]. This speed of adoption is creating both opportunities and challenges for workers across various sectors.

Some professionals are embracing AI, integrating it into their workflows and creative processes. However, for many others, this transition is causing anxiety about job security and the need to adapt quickly. The narrative of "adapt or be left behind" is becoming more prevalent, with Microsoft CEO Satya Nadella acknowledging that the transition to the AI era "might feel messy at times, but transformation always is" [3].

As AI capabilities grow, so do the associated risks. Companies like Lowe's are establishing AI Transformation Offices to evaluate projects for both value and risk [2]. There's an increasing focus on implementing guardrails to ensure conversational safety, privacy, and compliance with regulations like GDPR and HIPAA.

The marketing industry has been particularly quick to adopt AI technologies. A survey by Outcomes Rocket found that nearly 90% of marketers already include AI in their processes, with generative AI being used by 94% of respondents [5]. However, this rapid adoption also raises concerns about the quality and authenticity of AI-generated content, with over 93% of marketers frequently encountering issues such as errors, biases, or irrelevant outputs [5].

Source: diginomica

As we move towards 2035, the integration of AI into various aspects of life and business seems inevitable. However, the path to this AI-enhanced future is not without challenges, requiring careful consideration of ethical, strategic, and societal implications.
Summarized by Navi