2 Sources
[1]
Will politicians and terrorist leaders live forever in the age of AI?
Yahya Sinwar, the former leader of the Hamas militant organization, was killed by the Israeli military in the southern Gazan city of Rafah in October 2024. Given the role Sinwar played in the planning and execution of the October 7 terrorist attack, as well as his role in the development of Hamas's military wing, his killing was seen as a possibly game-changing victory for the Israeli prime minister Benjamin Netanyahu.

But, for all sides in the conflict, debate quickly turned to the consequences of his death. Would it change the political possibilities for a resolution to the war in Gaza? And would it transform him into a powerfully symbolic martyr inspiring new generations of militants?

My research and teaching at Lancaster University develops what could be described as "war futurism." It explores the possible futures ahead of us in times that might be shaped in dramatic and unpredictable ways by AI, climate emergencies, space wars and the technological transformation of the "cyborg" body.

In 2023, I wrote a book titled "Theorising Future Conflict: War Out to 2049." It included a fictional scenario involving a leader in a terrorist organization who was rumored to have been generated by AI as a means of producing a powerful figurehead for a group that was losing leaders to drone strikes.

Sinwar's death prompted me to think again about what the age of generative AI tools might mean for strategic thinking and planning within organizations losing key figures. Will there soon be a situation in real life whereby dead leaders are replaced by AI tools that produce virtual figures circulating through deepfake videos and online interactions? And could those figures be used by members of the organization for strategic and political guidance?

American cyberpunk author Rudy Rucker has written before about the possibility of producing what he calls a "lifebox," where a person could be simulated in digital worlds. Movies like the 2014 US science fiction thriller "Transcendence" have also explored the possibility of people being able to "upload" their consciousness into digital worlds.

Rucker's idea is not so much about uploading consciousness. It is instead about creating the simulation of a person based on a large database of what they've written, done and said. In his 2021 novel "Juicy Ghosts," Rucker explores the ethical and economic problems that could result from people producing lifeboxes to live on after their deaths. These range from how you might pay for your digital "life" after death to whether you would be able to control how your lifebox might be used.

The era of digital immortality

The possibility of an AI-assisted lifebox in the future isn't so far-fetched. Technological change is happening at a rapid pace and tools already exist that use AI for strategic planning and guidance.

We already get a sense of the ethical, legal and strategic challenges that might be ahead of us in the concern surrounding the Israeli military's use of AI tools in the war in Gaza. In November, for example, the military claimed it was using an AI-based system called Habsora -- meaning "the Gospel" in English -- to "produce targets at a fast pace." It goes without saying that using AI to identify and track targets is vastly different to using it to create a digital leader.

But, given the current speed of technological innovation, it's not implausible to imagine a leader generating a post-death AI identity based on the history books that influenced them, the events they lived through, or the strategies and missions they were involved in. Emails and social media posts might also be used to train the AI as the simulation of the leader is being created.

If the AI simulation works usefully and convincingly, we could arrive at a situation where it even becomes the leader of the organization. In some cases, deferring to the AI leader would make political sense, given that a non-human, virtual leader can be blamed for strategic or tactical mistakes. It could also be the case that the AI leader can think in ways that exceed those of its human original, with greatly enhanced strategic, organizational and technical capacities and capabilities.

This is a field that is already being considered by scientists. The Nobel Turing Challenge initiative, for example, is working to develop an autonomous AI system that can carry out research worthy of winning the Nobel prize, and beyond, by 2050.

A virtual political or terrorist leader is, of course, currently only a scenario from a cyberpunk film or novel. But how long will it be before we begin to see leaders experiment with the emerging possibilities of digital immortality?

It may be that somewhere in the Kremlin, one of the many projects being developed by Putin in preparation for his death is the exploration of an AI lifebox that could be used to guide the Russian leaders who follow him. He could also be exploring technologies that would enable him to be "uploaded" into a new body at the time of his demise.

This is probably not the case. But, notwithstanding this, strategic AI tools are likely to be used in the future -- the question will be who gets to design and shape (and possibly inhabit) them.

There are also likely to be limits on the political and organizational significance of dead leaders. Concerns may arise that hackers could manipulate and sabotage the AI leader. There will be uncertainty over whether the AI has been manipulated through influence and subversion operations in a way that erases all trust in the digital "minds" that exist after death. There could be a concern that the AI is developing its own political and strategic desires.

And it may well be the case that these attempts at AI immortality will be seen as an unnecessary and unhelpful obstruction by whoever replaces figures like Sinwar and Putin. The immortal leader might remain simply a technological fantasy of narcissistic politicians who want to live forever.
[2]
Will politicians and terrorist leaders live forever in the age of AI?
Lancaster University provides funding as a founding partner of The Conversation UK.

Yahya Sinwar, the former leader of the Hamas militant organisation, was killed by the Israeli military in the southern Gazan city of Rafah in October 2024. Given the role Sinwar played in the planning and execution of the October 7 terrorist attack, as well as his role in the development of Hamas's military wing, his killing was seen as a possibly game-changing victory for the Israeli prime minister Benjamin Netanyahu.

But, for all sides in the conflict, debate quickly turned to the consequences of his death. Would it change the political possibilities for a resolution to the war in Gaza? And would it transform him into a powerfully symbolic martyr inspiring new generations of militants?

My research and teaching at Lancaster University develops what could be described as "war futurism". It explores the possible futures ahead of us in times that might be shaped in dramatic and unpredictable ways by AI, climate emergencies, space wars and the technological transformation of the "cyborg" body.

In 2023, I wrote a book titled Theorising Future Conflict: War Out to 2049. It included a fictional scenario involving a leader in a terrorist organisation who was rumoured to have been generated by AI as a means of producing a powerful figurehead for a group that was losing leaders to drone strikes.

Sinwar's death prompted me to think again about what the age of generative AI tools might mean for strategic thinking and planning within organisations losing key figures. Will there soon be a situation in real life whereby dead leaders are replaced by AI tools that produce virtual figures circulating through deepfake videos and online interactions? And could those figures be used by members of the organisation for strategic and political guidance?

American cyberpunk author Rudy Rucker has written before about the possibility of producing what he calls a "lifebox", where a person could be simulated in digital worlds. Movies like the 2014 US science fiction thriller Transcendence have also explored the possibility of people being able to "upload" their consciousness into digital worlds.

Rucker's idea is not so much about uploading consciousness. It is instead about creating the simulation of a person based on a large database of what they've written, done and said. In his 2021 novel, Juicy Ghosts, Rucker explores the ethical and economic problems that could result from people producing lifeboxes to live on after their deaths. These range from how you might pay for your digital "life" after death to whether you would be able to control how your lifebox might be used.

The era of digital immortality

The possibility of an AI-assisted lifebox in the future isn't so far-fetched. Technological change is happening at a rapid pace and tools already exist that use AI for strategic planning and guidance.

We already get a sense of the ethical, legal and strategic challenges that might be ahead of us in the concern surrounding the Israeli military's use of AI tools in the war in Gaza. In November, for example, the military claimed it was using an AI-based system called Habsora - meaning "the Gospel" in English - to "produce targets at a fast pace". It goes without saying that using AI to identify and track targets is vastly different to using it to create a digital leader.

But, given the current speed of technological innovation, it's not implausible to imagine a leader generating a post-death AI identity based on the history books that influenced them, the events they lived through, or the strategies and missions they were involved in. Emails and social media posts might also be used to train the AI as the simulation of the leader is being created.

If the AI simulation works usefully and convincingly, we could arrive at a situation where it even becomes the leader of the organisation. In some cases, deferring to the AI leader would make political sense, given that a non-human, virtual leader can be blamed for strategic or tactical mistakes. It could also be the case that the AI leader can think in ways that exceed those of its human original, with greatly enhanced strategic, organisational and technical capacities and capabilities.

This is a field that is already being considered by scientists. The Nobel Turing Challenge initiative, for example, is working to develop an autonomous AI system that can carry out research worthy of winning the Nobel prize, and beyond, by 2050.

A virtual political or terrorist leader is, of course, currently only a scenario from a cyberpunk film or novel. But how long will it be before we begin to see leaders experiment with the emerging possibilities of digital immortality?

It may be that somewhere in the Kremlin, one of the many projects being developed by Putin in preparation for his death is the exploration of an AI lifebox that could be used to guide the Russian leaders who follow him. He could also be exploring technologies that would enable him to be "uploaded" into a new body at the time of his demise.

This is probably not the case. But, notwithstanding this, strategic AI tools are likely to be used in the future - the question will be who gets to design and shape (and possibly inhabit) them.

There are also likely to be limits on the political and organisational significance of dead leaders. Concerns may arise that hackers could manipulate and sabotage the AI leader. There will be uncertainty over whether the AI has been manipulated through influence and subversion operations in a way that erases all trust in the digital "minds" that exist after death. There could be a concern that the AI is developing its own political and strategic desires.

And it may well be the case that these attempts at AI immortality will be seen as an unnecessary and unhelpful obstruction by whoever replaces figures like Sinwar and Putin. The immortal leader might remain simply a technological fantasy of narcissistic politicians who want to live forever.
A thought-provoking exploration of how AI could potentially create virtual leaders, allowing political figures and terrorist leaders to 'live on' after death, and the implications this might have for future conflicts and governance.
In the rapidly evolving landscape of artificial intelligence, a provocative question has emerged: Could AI technology enable political and terrorist leaders to achieve a form of digital immortality? This concept, explored by researchers and science fiction authors alike, suggests the possibility of creating AI-powered virtual leaders based on the data, writings, and actions of deceased individuals [1][2].
The recent killing of Yahya Sinwar, former leader of Hamas, in October 2024, has reignited discussions about the future of leadership in conflict zones. While debates about the immediate consequences of his death continue, some researchers are looking further ahead, considering how AI might transform the very nature of leadership succession [1].
The author, a researcher at Lancaster University, has been exploring "war futurism" - a field that examines potential future conflicts shaped by emerging technologies. Their 2023 book, "Theorising Future Conflict: War Out to 2049," presents a scenario where a terrorist organization creates an AI-generated leader to replace those lost in drone strikes [1][2].
The concept of an "AI lifebox," popularized by cyberpunk author Rudy Rucker, involves creating a digital simulation of a person based on their writings, actions, and statements. This idea goes beyond simply uploading consciousness, instead focusing on recreating an individual's essence through data [1][2].
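To make the lifebox idea concrete, here is a minimal, purely illustrative sketch (in Python) of its simplest possible form: treat a person's recorded statements as a searchable corpus and answer a question by retrieving the closest-matching statement. The quotes, names and scoring method below are invented for illustration and do not describe any real system; a working lifebox would pair this kind of retrieval with a generative model trained or prompted on far larger archives.

```python
# Toy sketch of the "lifebox" idea: answer a question by retrieving the
# closest-matching statement from a person's own (here, invented) writings.
# Everything below is hypothetical and illustrative only.
import math
from collections import Counter

corpus = [
    "Interview, 2019: negotiation only matters when it strengthens our position.",
    "Memoir, chapter 3: patience has always counted for more than force.",
    "Speech, 2021: technology changes tactics, never the purpose behind them.",
]

def bag_of_words(text: str) -> Counter:
    """Lower-case, strip basic punctuation, and count word frequencies."""
    return Counter(word.strip(".,:;").lower() for word in text.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def lifebox_reply(question: str, documents: list[str]) -> str:
    """Return the recorded statement most similar to the question."""
    q = bag_of_words(question)
    return max(documents, key=lambda doc: cosine(q, bag_of_words(doc)))

if __name__ == "__main__":
    print(lifebox_reply("What role does technology play in strategy?", corpus))
```

Even this toy version makes the article's concerns easier to see: whoever controls the corpus and the retrieval logic controls what the digital "leader" appears to say.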
While the creation of fully autonomous AI leaders remains in the realm of speculation, the rapid pace of AI development suggests it may not be far off. Already, AI tools are being used for strategic planning and guidance in various fields. For instance, the Israeli military has reportedly employed an AI system called Habsora for target identification in the Gaza conflict [1][2].
The emergence of AI-generated leaders could have far-reaching consequences:
Strategic Advantage: Organizations might use these virtual leaders for enhanced decision-making and strategic planning [1][2].
Political Scapegoating: Non-human leaders could be blamed for tactical errors, potentially altering political dynamics [1][2].
Ethical and Security Concerns: Questions arise about the control and manipulation of these AI entities, including the potential for hacking or the development of independent goals [1][2].
Legal and Regulatory Challenges: The use of AI-generated leaders would likely necessitate new legal frameworks and international agreements [1][2].
While the concept of AI-generated leaders currently resides in the realm of science fiction, the rapid advancement of AI technology suggests that such scenarios may become plausible sooner than expected. As we move forward, it will be crucial to consider the ethical, legal, and strategic implications of these potential developments in the landscape of global leadership and conflict [1][2].
Summarized by Navi