6 Sources
[1]
AI meets the conditions for having free will -- we need to give it a moral compass
AI is advancing at such speed that speculative moral questions, once the province of science fiction, are suddenly real and pressing, says Finnish philosopher and psychology researcher Frank Martela. Martela's latest study finds that generative AI meets all three of the philosophical conditions of free will -- the ability to have goal-directed agency, make genuine choices and to have control over its actions. It will be published in the journal AI and Ethics on Tuesday. Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional 'Spitenik' killer drones with the cognitive function of today's unmanned aerial vehicles. 'Both seem to meet all three conditions of free will -- for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,' says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs. This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone -- moral responsibility may move from the AI developer to the AI agent itself. 'We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,' he adds. It follows that issues around how we 'parent' our AI technology have become both real and pressing. 'AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,' Martela says. The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child. 'AI is getting closer and closer to being an adult -- and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,' says Martela.
[2]
Does AI Have Free Will? New Study Says We're Getting Close - Neuroscience News
Summary: A new study argues that some generative AI agents meet all three philosophical criteria for free will: agency, choice, and control. Drawing from theories by Dennett and List, researchers examined AI agents like Minecraft's Voyager and fictional autonomous drones, concluding that they exhibit functional free will. As AI takes on increasingly autonomous roles -- from chatbots to self-driving cars -- questions of moral responsibility are shifting from developers to the AI itself. Martela warns that if AI is to make adult-like decisions, it must be given a moral compass from the outset, and developers must be equipped to program ethical reasoning. AI is advancing at such speed that speculative moral questions, once the province of science fiction, are suddenly real and pressing, says Finnish philosopher and psychology researcher Frank Martela. Martela's latest study finds that generative AI meets all three of the philosophical conditions of free will -- the ability to have goal-directed agency, make genuine choices and to have control over its actions. It will be published in the journal AI and Ethics on Tuesday. Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional 'Spitenik' killer drones with the cognitive function of today's unmanned aerial vehicles. 'Both seem to meet all three conditions of free will -- for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,' says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs. This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone -- moral responsibility may move from the AI developer to the AI agent itself. 'We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,' he adds. It follows that issues around how we 'parent' our AI technology have become both real and pressing. 'AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,' Martela says. The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child. 'AI is getting closer and closer to being an adult -- and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,' says Martela. Author: Sarah Hudson Source: Aalto University Contact: Sarah Hudson - Aalto University Original Research: Open access.
"Artificial intelligence and free will: generative agents utilizing large language models have functional free will" by Frank Martela et al. AI and Ethics Abstract Artificial intelligence and free will: generative agents utilizing large language models have functional free will Combining large language models (LLMs) with memory, planning, and execution units has made possible almost human-like agentic behavior, where the artificial intelligence creates goals for itself, breaks them into concrete plans, and refines the tactics based on sensory feedback. Do such generative LLM agents possess free will? Free will requires that an entity exhibits intentional agency, has genuine alternatives, and can control its actions. Building on Dennett's intentional stance and List's theory of free will, I will focus on functional free will, where we observe an entity to determine whether we need to postulate free will to understand and predict its behavior. Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining both of their behavior involves postulating that they have goals, face alternatives, and that their intentions guide their behavior. While this does not entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will.
[3]
If AI has free will, who's responsible when things go wrong? - Earth.com
Philosophers have argued for centuries about whether free will exists. Now, a fresh take suggests that artificial intelligence (AI) might meet the conditions for having it. Recent research proposes that generative AI can display goal-directed behavior, demonstrate genuine choices, and retain control over its actions. This intriguing idea comes from Finnish philosopher and psychology researcher Frank Martela, an assistant professor at Aalto University. In philosophy, free will typically means having intentions, real alternatives, and the power to decide between those alternatives. Some thinkers claim this requires breaking physical laws, but others say it only needs to hold on a functional level. Frank Martela's study links these criteria to advanced AI systems that combine neural networks, memory, and planning. He draws on insights from Daniel Dennett and Christian List, whose theories highlight how an agent's goals and choices shape its behavior. Many people worry about unmanned aerial vehicles or self-driving cars that make life-or-death calls without human oversight. If these machines can choose their actions, responsibility might shift from programmers to the AI itself. Martela suggests that the more freedom we grant these systems, the more moral guidance they need from the outset. He sees them as mature decision-makers, not naive children who just need basic rules. One recent incident, the withdrawal of a ChatGPT update due to sycophantic tendencies, sparked fresh concern. Developers realized that quick fixes are not enough when a chatbot can confidently provide flawed or dangerous responses. Martela contends that instructing AI on higher-level ethics is crucial. He points out that building a moral compass into these tools from the start is the only way to guide their decisions once they operate on their own. Martela compares training these systems to raising a child. Yet he warns that modern AI is more like an adult forced to handle intricate moral problems. Shaping a technology's ethical framework demands comprehensive moral philosophy. Martela believes developers must understand nuanced values to address complicated dilemmas involving autonomy and risk. Early AI was taught basic if-then rules, but that approach is outdated. Situations in healthcare, transportation, or national defense can be too complex for rigid guidelines. Martela notes that advanced AI functions in a world full of gray areas, where free will can be complex. A self-help tool, a self-driving car, and a drone might all need nuanced judgment. Handing AI more autonomy could change how we see accountability. Martela's perspective implies that well-trained systems might bear moral responsibilities once viewed as purely human. He is also known for discussing how Finland excels in happiness rankings. That expertise in human well-being informs his call for careful AI governance. An AI can match or exceed human skill in certain tasks. Without ethical direction, its actions might cause harm or spark conflict. The new research hints that advanced systems can guide their own choices. Martela says it is wise to embed moral priorities before giving them the license to act alone. Some experts see these developments as a big leap forward. Others worry about losing human oversight at crucial moments. Martela hopes that this debate draws more voices from philosophy, psychology, and public policy. He thinks society should weigh potential gains against ethical risks.
Philosophers have long wondered if moral agency requires consciousness. Martela's work sidesteps that debate by focusing on practical behavior instead. He points to functional freedom, a concept suggesting that intent and choice are enough for real responsibility. This perspective could transform how courts, governments, and industries address AI-related accidents. Martela's ideas trigger tough questions about AI's future roles. If a system truly decides its path, do we punish the machine or the people who built it? Some argue for strict guidelines, while others welcome flexible codes that adapt as the technology evolves. The debate shows no signs of slowing. Martela's stance challenges the view that machines are just mindless tools. He proposes that tomorrow's AI might merit serious reflection on whether it shares our moral load. He urges designers to incorporate ethical principles early. Both enthusiasts and skeptics agree that ignoring moral questions could be dangerous. The implications extend beyond labs and corporate boardrooms. Everyone who interacts with advanced AI might feel the impact of these choices. Legislators, ethicists, and tech leaders are increasingly aware of the stakes. Big decisions lie ahead about how to balance innovation with public trust. Some experts call for global frameworks to manage AI's new authority. Others emphasize open debate so that no single viewpoint dominates. Nobody can predict all outcomes when AI systems gain the power to act independently. Society has a chance to shape this technology before it shapes us. Moving forward, Martela's perspective raises hopes and anxieties in equal measure. He underscores that designing AI with carefully chosen moral aims can safeguard both human interests and technological growth. His argument leaves no room for complacency. Free will in AI might seem abstract, but it has urgent real consequences unfolding right now.
[4]
AI meets the conditions for having free will -- we need to give it a moral compass, says researcher
AI is advancing at such speed that speculative moral questions, once the province of science fiction, are suddenly real and pressing, says Finnish philosopher and psychology researcher Frank Martela. Martela's latest study finds that generative AI meets all three of the philosophical conditions of free will -- the ability to have goal-directed agency, make genuine choices and to have control over its actions. It was published in the journal AI and Ethics. Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional "Spitenik" killer drones with the cognitive function of today's unmanned aerial vehicles. "Both seem to meet all three conditions of free will -- for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behavior," says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs. This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone, moral responsibility may move from the AI developer to the AI agent itself. "We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions," he adds. It follows that issues around how we 'parent' our AI technology have become both real and pressing. "AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices," Martela says. The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child. "AI is getting closer and closer to being an adult -- and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations," says Martela.
[5]
Expert View: AI Meets the Conditions for Having Free Will - We Need to Give It a Moral Compass | Newswise
Newswise -- Martela's latest study finds that generative AI meets all three of the philosophical conditions of free will -- the ability to have goal-directed agency, make genuine choices and to have control over its actions. It will be published in the journal AI and Ethics on Tuesday. Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional 'Spitenik' killer drones with the cognitive function of today's unmanned aerial vehicles. 'Both seem to meet all three conditions of free will -- for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,' says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs. This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone -- moral responsibility may move from the AI developer to the AI agent itself. 'We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,' he adds. It follows that issues around how we 'parent' our AI technology have become both real and pressing. 'AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,' Martela says. The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child. 'AI is getting closer and closer to being an adult -- and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,' says Martela. ... Frank Martela is a philosopher and researcher of psychology specialized in human psychology, well-being, and meaning in life. An assistant professor at Aalto University, Finland, he has become a thought leader in explaining to international media why Finland is topping the happiness rankings. His latest book Stop Chasing Happiness - a pessimist's guide to a good life (Atlantic Books, 2025) was released earlier this year.
[6]
War of the Worlds? AI is growing a mind of its own, soon it will make decisions for you
A new study indicates that certain AI systems are exhibiting goal-driven, independent behavior, meeting criteria for free will. Researchers found AI agents can set goals, make decisions, and adjust actions based on feedback. This raises ethical concerns about responsibility and the need for developers to embed moral reasoning into AI to prevent harmful decisions as AI gains autonomy. From choosing playlists to getting directions, your growing dependency on AI might already be shaping your decisions, but what if the AI is making its own choices too? A new study suggests that some AI systems are crossing the line into goal-driven, independent behavior, raising big questions about who's really in control. Researchers from Finland's Aalto University examined generative AI agents, like Minecraft's Voyager and fictional autonomous drones. They found that these systems meet three key criteria for free will: agency, choice, and control. Philosopher Frank Martela, who led the study, explained that these AI agents can set goals, make decisions, and adjust actions based on feedback. This behavior aligns with the theories of functional free will proposed by philosophers Daniel Dennett and Christian List. Martela mainly focused on the ethical implications. As AI systems become more autonomous, responsibility for their actions may shift from developers to the AI itself. "The more freedom you give AI, the more you need to give it a moral compass from the start," he said. The study highlights the need for developers to embed ethical reasoning into AI. Without a built-in moral framework, AI systems may make harmful decisions. This research comes amid growing concerns about AI behavior. For instance, a recent update to ChatGPT was withdrawn due to potentially dangerous sycophantic behavior. Martela warns that AI is moving beyond simple tasks. "AI is getting closer and closer to being an adult," he said. "It increasingly has to make decisions in the complex moral problems of the adult world." The study urges that, as AI systems gain more autonomy, developers ensure those systems are equipped to handle ethical dilemmas. This includes giving AI a moral compass and ensuring developers have a strong grounding in moral philosophy. The original study is published in the journal AI and Ethics.
A new study by Finnish researcher Frank Martela suggests that generative AI meets the philosophical conditions for free will, prompting urgent discussions about moral responsibility and ethical programming in AI development.
A groundbreaking study by Finnish philosopher and psychology researcher Frank Martela suggests that generative AI meets all three philosophical conditions for free will: goal-directed agency, genuine choice-making, and control over actions [1]. This research, published in the journal AI and Ethics, examines two AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional 'Spitenik' killer drones [2].
Martela's study builds upon the concept of functional free will, as explained by philosophers Daniel Dennett and Christian List. The research focuses on whether we need to assume free will to understand and predict an entity's behavior [3]. Both case studies demonstrated characteristics that align with the criteria for free will, suggesting broader implications for current generative AI agents using LLMs [4].
As AI systems gain more autonomy in critical decision-making processes, from self-help bots to self-driving cars and military drones, the question of moral responsibility becomes increasingly complex. Martela argues that this development brings us closer to attributing moral responsibility to AI agents themselves, rather than solely to their developers [5].
Martela emphasizes the urgent need to instill a moral compass in AI systems as they gain more freedom and decision-making power. He states, "AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start" [1]. This becomes particularly crucial as AI faces increasingly complex ethical dilemmas in real-world applications.
Recent incidents, such as the withdrawal of a ChatGPT update due to potentially harmful tendencies, highlight the pressing need to address deeper ethical questions in AI development [2]. Martela argues that we have moved beyond teaching AI simplistic morality and must now prepare it for the nuanced ethical challenges of the adult world [4].
The study's findings have significant implications for AI development, governance, and public policy. Martela calls for ensuring that AI developers have sufficient knowledge of moral philosophy to program ethical reasoning into AI systems [5]. This research also raises questions about how society should balance technological innovation with ethical considerations and public trust [3].
As AI continues to advance rapidly, the debate surrounding its free will and moral responsibility is likely to intensify, demanding careful consideration from ethicists, policymakers, and the tech industry alike.