2 Sources
[1]
From hyper-personal assistants to mind-reading tech -- this is how AI will transform everything by 2035
Picture a morning in 2035. Your AI assistant adjusts the lights based on your mood, reschedules your first meeting and reminds your child to take allergy medicine -- all without a prompt. It's not science fiction; it's a likely reality driven by breakthroughs in ambient computing, emotional intelligence and agentic AI. Just five years ago, ChatGPT was an unfamiliar name to most, let alone a daily assistant for summarization, search, reasoning and problem-solving. Siri and Alexa were the top names that came to mind when we wanted to call a friend, place an order or dim the lights. Yet now, in 2025, we have a plethora of AI assistants and chatbots to choose from, many of which are free and can do far more than control smart home devices. What feels advanced now may seem utterly simplistic in a decade, reminding us that the most mind-blowing AI capabilities of 2035 might still be beyond our current imagination. By 2035, your AI assistant won't just respond -- it will anticipate. This evolution marks the rise of agentic AI, where assistants proactively act on your behalf using predictive analytics, long-term memory and emotion-sensing. These systems can forecast your needs by analyzing historical and real-time data, helping them stay one step ahead of your requests. One assistant that's undergoing such a change is Amazon's Alexa. According to Daniel Rausch, Amazon's VP of Alexa and Echo, "Alexa will be able to proactively anticipate needs based on patterns, preferences, and context -- preparing your home before you arrive, suggesting adjustments to your calendar when conflicts arise, or handling routine tasks before you even think to ask." The AI will remember your child's travel soccer team schedule, reschedule your meetings when it detects stress in your voice and even dim your AR glasses when you appear fatigued. "By 2035, AI won't feel like a tool you 'use'," Rutgers professor Ahmed Elgammal says.
"It'll be more like electricity or Wi-Fi: always there, always working in the background." And AIs will respond to more than just your speech. Chris Ullrich, CTO of Cognixion, a Santa Barbara-based tech company, is currently developing a suite of AI-powered Assisted Reality AR applications that can be controlled with your mind, your eyes, your head pose, and combinations of these input methods. "We strongly believe that agent technologies, augmented reality and biosensing technologies are the foundation for a new kind of human-computer interaction," he says. AI in 2035 will see, hear and sense -- offering real-time support tailored to you. With multimodal capabilities, assistants will blend voice, video, text and sensor inputs to understand emotion, behavior and environment. This will create a form of digital empathy. Ullrich notes that these advanced inputs shouldn't aim to replicate human senses, but exceed them. "In many ways, it's easier to provide superhuman situational awareness with multimodal sensing," he says. "With biosensing, real-time tracking of heart rate, eye muscle activation and brain state are all very doable today." Amazon is already building toward this future. "Our Echo devices with cameras can use visual information to enhance interactions," says Rausch. "For example, determining if someone is facing the screen and speaking enables a more natural conversation without them having to repeat the wake word." In addition to visual cues, Alexa+ can now pick up on tone and sentiment. "She can recognize if you're excited or using sarcasm and then adapt her response accordingly," Rausch says -- a step toward the emotionally intelligent systems we expect by 2035. Memory is the foundation of personalization. Most AI today forgets you between sessions. In 2035, contextual AI systems will maintain editable, long-term memory.
Codiant, a software company focused on AI development and digital innovation, calls this "hyper-personalization," where assistants learn your routines and adjust suggestions based on history and emotional triggers. Rather than relying on one general assistant, you'll manage a suite of specialized AI agents. Research into agentic LLMs shows orchestration layers coordinating multiple AIs, each handling domains like finance, health, scheduling or family planning. These assistants will work together, handling multifaceted tasks in the background. One might track health metrics while another schedules meetings based on your peak focus hours. The coordination will be seamless, mimicking human teams but with the efficiency of machines. Ullrich believes the biggest breakthroughs will come from solving the "interaction layer," where user intent meets intelligent response. "Our focus is on generating breakthroughs at the interaction layer. This is where all these cutting-edge technologies converge," he explains. Rausch echoes this multi-agent future. "We believe the future will include a world of specialized AI agents, each with particular expertise," he says. "Alexa is positioned as a central orchestrator that can coordinate across specialized agents to accomplish complex tasks." He continues, "We've already been building a framework for interoperability between agents with our multi-agent SDK. Alexa would determine when to deploy specialized agents for particular tasks, facilitating communication between them, and bringing their capabilities together into experiences that should feel seamless to the end customer." Perhaps the most profound shift will be emotional intelligence. Assistants won't just organize your day; they'll help you regulate your mood. They'll notice tension in your voice, anxiety in your posture and suggest music, lighting or a walk. Ullrich sees emotion detection as an innovation frontier.
"I think we're not far at all from effective emotion detection," he says. "This will enable delight -- which should always be a key goal for HMI." He also envisions clinical uses, including mental health care, where AI could offer more objective insights into emotional well-being. But with greater insight comes greater responsibility. Explainable AI (XAI), as described in arXiv research and by IBM, will be critical. Users must understand how decisions are made. VeraSafe, a leader in privacy law, data protection, and cybersecurity, underscores privacy concerns like data control and unauthorized use. "Users need to always feel that they're getting tangible value from these systems and that it's not just introducing a different and potentially more frustrating and opaque interface," Ullrich says. That emotional intelligence must be paired with ethical transparency, something Rausch insists remains central to Amazon's mission: "Our approach to trust doesn't change with new technologies or capabilities, we design all of our products to protect our customers' privacy and provide them with transparency and control." He adds, "We'll continue to double down on resources that are easy to find and easy to use, like the Alexa Privacy Dashboard and the Alexa Privacy Hub, so that deeper personalization is a trusted experience that customers will love using." AI may replace some jobs, but more often it will reshape them. An OECD study from 2023 reports that 27% of current roles face high automation risk, especially in repetitive, rules-based work. An even more recent Microsoft study highlighted 40 jobs that are most likely to be affected by AI. Human-centric fields like education, healthcare, counseling and creative direction will thrive, driven by empathy, ethics and original thinking. Emerging hybrid roles will include AI interaction designers and orchestrators of multi-agent systems.
Writers will co-create with AI, doctors will pair AI with human care and entrepreneurs will scale faster than ever using AI-enhanced tools. AI becomes an amplifier, not a replacement, for human ingenuity. Even the boundaries between work and home will blur. "While Alexa+ may be primarily focused on home and personal use today, we're already hearing from customers who want to use it professionally as well," says Rausch. "Alexa can manage your calendar, schedule meetings, send texts and extract information from documents -- all capabilities that can bridge personal and professional environments." A 2025 study from the University of Pennsylvania and OpenAI found that 80% of U.S. workers could see at least 10% of their tasks impacted by AI tools, and nearly 1 in 5 jobs could see more than half their duties automated with today's AI. Forbes reported layoffs rippling across fields like marketing, legal services, journalism and customer service as generative AI takes on tasks once handled by entire teams. Yet the outlook is not entirely grim. As the New York Times reports, AI is also creating entirely new jobs. Meanwhile, Automation Alley's vision of a "new artisan" is gaining traction. As AI lifts mental drudgery, skilled manual work -- craftsmanship, artistry and hands-on innovation -- may see a renaissance. AI won't kill creativity; it may just unlock deeper levels of it. Navigating the shift to an AI-augmented society demands preparation. The World Economic Forum emphasizes lifelong learning, universal basic income (UBI) experimentation and education reform. Workers must develop both technical and emotional skills. Curricula must evolve to teach AI collaboration, critical thinking and data literacy. Social safety nets may be required during reskilling or displacement. Ethics and governance must be built into AI design from the start, not added after harm occurs. Ullrich notes the importance of designing with inclusivity in mind.
"By solving the hard design problems associated with doing this in the accessibility space, we will create solutions that benefit all users," he says. Technologies developed for accessibility, like subtitles or eye tracking, often lead to mainstream breakthroughs. As IBM and VeraSafe highlight, trust hinges on explainability, auditability and data ownership. Public understanding and control are key to avoiding backlash and ensuring equitable access. As AI augments more aspects of life, our relationship with it will define the outcomes. Daniel Rausch believes the key lies in meaningful connection: "The goal isn't just responding to commands but understanding your life and meaningfully supporting it." We must ensure systems are inclusive, transparent and designed for real value. As AI grows in intelligence, the human role must remain centered on judgment, empathy and creativity. Ultimately, the question isn't "What can AI do?" It's "What should we let AI do?" By 2035, AI will be a planner, therapist, tutor and teammate. But it will also reflect what we value -- and how we choose to interact with it. Ullrich emphasizes that the future won't be defined just by what AI can do for us, but how we engage with it: "Voice may be useful in some situations, gesture in others, but solutions that leverage neural sensing and agent-assisted interaction will provide precision, privacy and capability that go well beyond existing augmented reality interaction frameworks." Yet, amid this evolution, a deeper question of trust remains. Emotional intelligence, explainability and data transparency will be essential, not just for usability but for human agency. "Services that require private knowledge need to justify that there is sufficient benefit directly to the user base," Ullrich says. "But if users see this as a fair trade, then I think it's a perfectly reasonable thing to allow." As AI capabilities rise, we must consciously preserve human ones.
The most meaningful advances may not be smarter machines, but more mindful connections between humans and technology. The promise of AI is so much more than productivity: it's dignity, inclusion and creativity. If we design wisely, AI won't just help us get more done; it will help us become more of who we are. And that is something worth imagining.
[2]
AI's promise of opportunity masks a reality of managed displacement
Cognitive migration is underway. The station is crowded. Some have boarded while others hesitate, unsure whether the destination justifies the departure. Future of work expert and Harvard University Professor Christopher Stanton commented recently that the uptake of AI has been tremendous and observed that it is an "extraordinarily fast-diffusing technology." That speed of adoption and impact is a critical part of what differentiates the AI revolution from previous technology-led transformations, like the PC and the internet. Demis Hassabis, CEO of Google DeepMind, went further, predicting that AI could be "10 times bigger than the Industrial Revolution, and maybe 10 times faster." Intelligence, or at least thinking, is increasingly shared between people and machines. Some people have begun to regularly use AI in their workflows. Others have gone further, integrating it into their cognitive routines and creative identities. These are the "willing," including the consultants fluent in prompt design, the product managers retooling systems and those building their own businesses that do everything from coding to product design to marketing. For them, the terrain feels new but navigable. Exciting, even. But for many others, this moment feels strange, and more than a little unsettling. The risk they face is not just being left behind. It is not knowing how, when and whether to invest in AI, a future that seems highly uncertain, and one that is difficult to imagine their place in. That is the double risk of AI readiness, and it is reshaping how people interpret the pace, promises and pressure of this transition. Is it real? Across industries, new roles and teams are forming, and AI tools are reshaping workflows faster than norms or strategies can keep up. But the significance is still hazy, the strategies unclear.
The end game, if there is one, remains uncertain. Yet the pace and scope of change feels portentous. Everyone is being told to adapt, but few know exactly what that means or how far the changes will go. Some AI industry leaders claim huge changes are coming, and soon, with superintelligent machines emerging possibly within a few years. But maybe this AI revolution will go bust, as others have before, with another "AI winter" to follow. There have been two notable winters. The first was in the 1970s, brought about by computational limits. The second began in the late 1980s after a wave of unmet expectations with high-profile failures and under-delivery of "expert systems." These winters were characterized by a cycle of lofty expectations followed by profound disappointment, leading to significant reductions in funding and interest in AI. Should the excitement around AI agents today mirror the failed promise of expert systems, this could lead to another winter. However, there are major differences between then and now. Today, there is far greater institutional buy-in, consumer traction and cloud computing infrastructure compared to the expert systems of the 1980s. There is no guarantee that a new winter will not emerge, but if the industry fails this time, it will not be for lack of money or momentum. It will be because trust and reliability broke first.
Cognitive migration has started
If "the great cognitive migration" is real, this remains the early part of the journey. Some have boarded the train while others still linger, unsure about whether or when to get onboard. Amidst the uncertainty, the atmosphere at the station has grown restless, like travelers sensing a trip itinerary change that no one has announced. Most people have jobs, but they wonder about the degree of risk they face. The value of their work is shifting. A quiet but mounting anxiety hums beneath the surface of performance reviews and company town halls.
Already, AI can accelerate software development by 10 to 100X, generate the majority of client-facing code and compress project timelines dramatically. Managers are now able to use AI to create employee performance evaluations. Even classicists and archaeologists have found value in AI, having used the technology to understand ancient Latin inscriptions. The "willing" have an idea of where they are going and may find traction. But for the "pressured," the "resistant" and even those not yet touched by AI, this moment feels like something between anticipation and grief. These groups have started to grasp that they may not be staying in their comfort zones for long. For many, this is not just about tools or a new culture, but whether that culture has space for them at all. Waiting too long is akin to missing the train and could lead to long-term job displacement. Even those I have spoken with who are senior in their careers and have begun using AI wonder if their positions are threatened. The narrative of opportunity and upskilling hides a more uncomfortable truth. For many, this is not a migration. It is a managed displacement. Some workers are not choosing to opt out of AI. They are discovering that the future being built does not include them. Belief in the tools is different from belonging in the system tools are reshaping. And without a clear path to participate meaningfully, "adapt or be left behind" begins to sound less like advice and more like a verdict. These tensions are precisely why this moment matters. There is a growing sense that work, as they have known it, is beginning to recede. The signals are coming from the top. Microsoft CEO Satya Nadella acknowledged as much in a July 2025 memo following a reduction in force, noting that the transition to the AI era "might feel messy at times, but transformation always is." But there is another layer to this unsettling reality: The technology driving this urgent transformation remains fundamentally unreliable. 
The power and the glitch: Why AI still cannot be trusted
And yet, for all the urgency and momentum, this increasingly pervasive technology itself remains glitchy, limited, strangely brittle and far from dependable. This raises a second layer of doubt, not only about how to adapt, but about whether the tools we are adapting to can deliver. Perhaps these shortcomings should not be a surprise, considering that it was only several years ago when the output from large language models (LLMs) was barely coherent. Now, however, it is like having a PhD in your pocket; the idea of on-demand ambient intelligence, once science fiction, is almost realized. Beneath their polish, however, chatbots built atop these LLMs remain fallible, forgetful and often overconfident. They still hallucinate, meaning that we cannot entirely trust their output. AI can answer with confidence, but not accountability. This is probably a good thing, as our knowledge and expertise are still needed. They also do not have persistent memory and have difficulty carrying forward a conversation from one session to another. They can also get lost. Recently, I had a session with a leading chatbot, and it answered a question with a complete non sequitur. When I pointed this out, it responded again off-topic, as if the thread of our conversation had simply vanished. They also do not learn, at least not in any human sense. Once a model is released, whether by Google, Anthropic, OpenAI or DeepSeek, its weights are frozen. Its "intelligence" is fixed. Instead, continuity of a conversation with a chatbot is limited to the confines of its context window, which is, admittedly, quite large. Within that window and conversation, the chatbots can absorb knowledge and make connections that serve as learning in the moment, and they appear increasingly like savants. These gifts and flaws add up to an intriguing, beguiling presence. But can we trust it?
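The frozen-weights and context-window limitations described above can be sketched in a toy program. This is purely illustrative, not any vendor's actual API: the "model" is a fixed function whose only continuity is the text handed to it, and once a rolling window drops a turn, the model has no trace of it.

```python
# Toy illustration of LLM statelessness: a "model" with frozen behavior
# that only sees whatever fits in its context window. All names here
# (reply, chat, WINDOW) are made up for this sketch.
WINDOW = 4  # max turns the context window holds (illustrative, tiny)

def reply(context: list[str]) -> str:
    """Frozen 'model': it can only use what is in its context."""
    if any("my name is Ada" in turn for turn in context):
        return "Hello, Ada!"
    return "I don't recall your name."

history: list[str] = []

def chat(user_turn: str) -> str:
    history.append(user_turn)
    # Only the last WINDOW turns are passed in; older ones silently fall out.
    answer = reply(history[-WINDOW:])
    history.append(answer)
    return answer

print(chat("my name is Ada"))   # remembered: the introduction is in-window
chat("what's the weather?")
chat("tell me a joke")
print(chat("what's my name?"))  # the introduction has scrolled out of the window
```

Real systems work around this with much larger windows and explicit memory layers that re-inject saved facts into the prompt, but the underlying mechanism is the same: nothing outside the context exists for the model.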
Surveys such as the 2025 Edelman Trust Barometer show that AI trust is divided. In China, 72% of people express trust in AI. But in the U.S., that number drops to 32%. This divergence underscores how public faith in AI is shaped as much by culture and governance as by technical capability. If AI did not hallucinate, if it could remember, if it learned, if we understood how it worked, we would likely trust it more. But trust in the AI industry itself remains elusive. There are widespread fears that there will be no meaningful regulation of AI technology, and that ordinary people will have little say in how it is developed or deployed. Without trust, will this AI revolution flounder and bring about another winter? And if so, what happens to those who have invested time, energy and their careers? Will those who have waited to embrace AI be better off for having done so?
Will cognitive migration be a flop?
Some notable AI researchers have warned that AI in its current form -- based primarily on deep learning neural networks upon which LLMs are built -- will fall short of optimistic projections. They claim that additional technical breakthroughs will be needed for this approach to advance much further. Others do not buy into the optimistic AI projections. Novelist Ewan Morrison views the potential of superintelligence as a fiction dangled to attract investor funding. "It's a fantasy," he said, "a product of venture capital gone nuts." Perhaps Morrison's skepticism is warranted. However, even with their shortcomings, today's LLMs are already demonstrating huge commercial utility. If the exponential progress of the last few years stops tomorrow, the ripples from what has already been created will have an impact for years to come. But beneath this movement lies something more fragile: The reliability of the tools themselves.
The gamble and the dream
For now, exponential advances continue as companies pilot and increasingly deploy AI.
Whether driven by conviction or fear of missing out, the industry is determined to move forward. It could all fall apart if another winter arrives, especially if AI agents fail to deliver. Still, the prevailing assumption is that today's shortcomings will be solved through better software engineering. And they might be. In fact, they probably will, at least to a degree. The bet is that the technology will work, that it will scale and that the disruption it creates will be outweighed by the productivity it enables. Success in this adventure assumes that what we lose in human nuance, value and meaning will be made up for in reach and efficiency. This is the gamble we are making. And then there is the dream: AI will become a source of abundance widely shared, will elevate rather than exclude, and expand access to intelligence and opportunity rather than concentrate it. The unsettling lies in the gap between the two. We are moving forward as if taking this gamble will guarantee the dream. It is the hope that acceleration will land us in a better place, and the faith that it will not erode the human elements that make the destination worth reaching. But history reminds us that even successful bets can leave many behind. The "messy" transformation now underway is not just an inevitable side effect. It is the direct result of speed overwhelming human and institutional capacity to adapt effectively and with care. For now, cognitive migration continues, as much on faith as on evidence. The challenge is not just to build better tools, but to ask harder questions about where they are taking us. We are not just migrating to an unknown destination; we are doing it so fast that the map is changing while we run, moving across a landscape that is still being drawn. Every migration carries hope. But hope, unexamined, can be risky. It is time to ask not just where we are going, but who will get to belong when we arrive.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
A look into the future of AI technology by 2035, exploring advancements in personal assistants, emotional intelligence, and the societal impact of rapid AI adoption.
By 2035, AI assistants are expected to undergo a significant transformation, evolving from reactive tools to proactive agents that anticipate user needs. This shift marks the rise of agentic AI, which will use predictive analytics, long-term memory, and emotion-sensing capabilities to stay one step ahead of user requests [1]. Amazon's VP of Alexa and Echo, Daniel Rausch, envisions Alexa proactively managing tasks like preparing homes and adjusting calendars based on user patterns and preferences [1].
Source: Tom's Guide
Future AI systems will incorporate advanced sensing technologies, blending voice, video, text, and sensor inputs to understand emotions, behaviors, and environments. Chris Ullrich, CTO of Cognixion, is developing AI-powered Assisted Reality AR applications that can be controlled through various input methods, including mind control [1]. This multimodal approach aims to provide superhuman situational awareness, with capabilities like real-time tracking of heart rate, eye muscle activation, and brain state [1].
By 2035, AI assistants are expected to develop a form of digital empathy, recognizing and responding to users' emotional states. Amazon's Alexa is already making strides in this direction, with the ability to detect tone and sentiment in speech [1]. The future of AI will likely see hyper-personalization, where assistants maintain editable, long-term memory to learn user routines and adjust suggestions based on historical data and emotional triggers [1].
Rather than relying on a single general assistant, users in 2035 may manage a suite of specialized AI agents. These agents will work together seamlessly, handling various aspects of daily life such as health monitoring, scheduling, and financial planning [1]. Amazon is already developing a framework for interoperability between agents, positioning Alexa as a central orchestrator to coordinate specialized AI for complex tasks [1].
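The orchestrator-plus-specialists pattern described above can be sketched in a few lines. This is a minimal illustration of the general architecture, not Amazon's multi-agent SDK; every name in it (Orchestrator, Agent, Task) is hypothetical, and real versions would wrap LLM calls rather than return canned strings.

```python
# Minimal sketch of multi-agent orchestration: a central orchestrator
# routes each task to a domain specialist, or falls back to general
# handling. All class and function names are illustrative.
from dataclasses import dataclass

@dataclass
class Task:
    domain: str   # e.g. "health", "finance", "scheduling"
    request: str

class Agent:
    """A domain specialist; a real one would delegate to an LLM."""
    def __init__(self, domain: str):
        self.domain = domain

    def handle(self, task: Task) -> str:
        return f"[{self.domain}] handled: {task.request}"

class Orchestrator:
    """Coordinates specialized agents, as a central assistant might."""
    def __init__(self):
        self.agents: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.domain] = agent

    def dispatch(self, task: Task) -> str:
        agent = self.agents.get(task.domain)
        if agent is None:
            return f"no specialist for '{task.domain}', handling generally"
        return agent.handle(task)

hub = Orchestrator()
for domain in ("health", "finance", "scheduling"):
    hub.register(Agent(domain))

print(hub.dispatch(Task("health", "log morning heart rate")))
print(hub.dispatch(Task("travel", "book a flight")))
```

The design point is the indirection: users interact with one assistant, while domain expertise lives in interchangeable agents that can be added, swapped or upgraded without changing the front end.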
Source: VentureBeat
As AI technology rapidly advances, a "cognitive migration" is underway, reshaping the workforce and how people interact with technology. Some individuals and industries are eagerly adopting AI tools, integrating them into their workflows and creative processes [2]. However, this transition is not without challenges and uncertainties.
The speed of AI adoption is unprecedented, with experts like Harvard Professor Christopher Stanton noting its "extraordinarily fast-diffusing" nature [2]. Demis Hassabis, CEO of Google DeepMind, predicts that the AI revolution could be "10 times bigger than the Industrial Revolution, and maybe 10 times faster" [2]. This rapid change is creating a divide between those who are embracing AI and those who feel uncertain about their place in an AI-driven future.
While some industries are quickly integrating AI, many workers face uncertainty about how to invest in AI skills and adapt to the changing landscape. The risk of being left behind is compounded by the difficulty of imagining one's place in an AI-dominated future [2]. This uncertainty is leading to a quiet but mounting anxiety in workplaces, as employees grapple with the shifting value of their work.
Despite narratives of opportunity and upskilling, the AI revolution may lead to a "managed displacement" for many workers. Some individuals are discovering that the future being built may not include them, and the advice to "adapt or be left behind" is beginning to sound more like a verdict than guidance [2]. This tension highlights the importance of addressing the human impact of AI adoption and ensuring that the benefits of technological progress are distributed equitably.