3 Sources
[1]
AI is about to radically alter military command structures that haven't changed much since Napoleon's army
American University School of International Service and American University provide funding as members of The Conversation US.

Despite two centuries of evolution, the structure of a modern military staff would be recognizable to Napoleon. At the same time, military organizations have struggled to incorporate new technologies as they adapt to the new domains of modern war - air, space and information. The sizes of military headquarters have grown to accommodate the expanded information flows and decision points of these new facets of warfare. The result is diminishing marginal returns and a coordination nightmare - too many cooks in the kitchen - that risks jeopardizing mission command.

AI agents - autonomous, goal-oriented software powered by large language models - can automate routine staff tasks, compress decision timelines and enable smaller, more resilient command posts. They can shrink the staff while making it more effective.

As an international relations scholar and reserve officer in the U.S. Army who studies military strategy, I see both the opportunity afforded by the technology and the acute need for change. That need stems from the reality that today's command structures still mirror Napoleon's field headquarters in both form and function: industrial-age architectures built for massed armies. Over time, these staffs have ballooned in size, making coordination cumbersome. They also produce sprawling command posts that modern precision artillery, missiles and drones can target effectively and that electronic warfare can readily disrupt. Russia's so-called "Graveyard of Command Posts" in Ukraine vividly illustrates how static headquarters become liabilities on a modern battlefield once opponents can mass precision artillery, missiles and drones against them.
The role of AI agents

Military planners now see a world in which AI agents - autonomous, goal-oriented software that can perceive, decide and act on their own initiative - are mature enough to deploy in command systems. These agents promise to automate the fusion of multiple sources of intelligence, threat modeling and even limited decision cycles in support of a commander's goals. There is still a human in the loop, but humans will be able to issue commands faster and receive more timely and contextual updates from the battlefield.

These AI agents can parse doctrinal manuals, draft operational plans and generate courses of action, which helps accelerate the tempo of military operations. Experiments - including efforts I ran at Marine Corps University - have demonstrated how even basic large language models can accelerate staff estimates and inject creative, data-driven options into the planning process.

These efforts point to the end of traditional staff roles. There will still be people - war is a human endeavor - and ethics will still factor into streams of algorithms making decisions. But the people who remain deployed are likely to gain the ability to navigate massive volumes of information with the help of AI agents. These teams are likely to be smaller than modern staffs. AI agents will allow teams to manage multiple planning groups simultaneously. For example, they will be able to use more dynamic red-teaming techniques - role-playing the opposition - and vary key assumptions to create a wider menu of options than traditional planning allows. The time saved by not having to build PowerPoint slides and update staff estimates can be shifted to contingency analysis - asking "what if" questions - and to building operational assessment frameworks - conceptual maps of how a plan is likely to play out in a particular situation - that give commanders more flexibility.
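The assumption-varying red-teaming idea described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not any fielded system: `draft_coa` stands in for a large language model call, and the assumption axes and all names are invented for the example.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class CourseOfAction:
    assumptions: dict
    summary: str

def draft_coa(assumptions: dict) -> CourseOfAction:
    # Stand-in for an LLM call that drafts one course of action
    # under a specific set of planning assumptions.
    detail = "; ".join(f"{k}={v}" for k, v in sorted(assumptions.items()))
    return CourseOfAction(assumptions, f"COA given {detail}")

def generate_option_menu(assumption_axes: dict) -> list[CourseOfAction]:
    """Vary every key assumption and draft one course of action per combination."""
    keys = list(assumption_axes)
    menu = []
    for values in product(*(assumption_axes[k] for k in keys)):
        menu.append(draft_coa(dict(zip(keys, values))))
    return menu

# Two assumptions with two values each yield a four-option menu,
# wider than a single-plan staff process would produce.
menu = generate_option_menu({
    "enemy_posture": ["defensive", "offensive"],
    "weather": ["clear", "storm"],
})
print(len(menu))  # 4
```

The point of the sketch is the shape of the workflow: the agent enumerates assumption combinations exhaustively, so planners spend their time comparing options rather than drafting them.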
Designing the next military staff

To explore the optimal design of this AI agent-augmented staff, I led a team of researchers at the Futures Lab of the bipartisan think tank Center for Strategic & International Studies to evaluate alternatives. The team developed three baseline scenarios reflecting what most military analysts see as the key operational problems in modern great power competition: joint blockades, firepower strikes and joint island campaigns. Joint refers to an action coordinated among multiple branches of a military. In the example of China and Taiwan, joint blockades describe how China could isolate the island nation and either starve it or set conditions for an invasion. Firepower strikes describe how Beijing could fire salvos of missiles - similar to what Russia is doing in Ukraine - to destroy key military centers and even critical infrastructure. Last, in Chinese doctrine, the Joint Island Landing Campaign describes the cross-strait invasion China's military has spent decades refining. Any AI agent-augmented staff should be able to manage warfighting functions across these three operational scenarios.

The research team found that the best model kept humans in the loop and focused on feedback loops. This approach - called the Adaptive Staff Model and based on pioneering work by sociologist Andrew Abbott - embeds AI agents within continuous human-machine feedback loops, drawing on doctrine, history and real-time data to evolve plans on the fly. In this model, military planning is ongoing and never complete, focused more on generating a menu of options for the commander to consider, refine and enact. The research team tested the approach with multiple AI models and found that it outperformed alternatives in each case.

AI agents are not without risk. First, they can be overly generalized, if not biased.
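As a rough illustration of the continuous human-machine feedback loop the Adaptive Staff Model describes, the toy sketch below keeps a human review step in the loop while an agent revises a plan against incoming battlefield updates. Every name here (`agent_revise`, `commander_review`) and the acceptance rule are invented for the example; a real agent would call a language model grounded in doctrine, history and live data.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    text: str
    revision: int = 0
    history: list = field(default_factory=list)

def agent_revise(plan: Plan, battlefield_update: str) -> Plan:
    # Stand-in for an AI agent folding new data into the next
    # revision of the plan; prior revisions are kept for audit.
    plan.history.append(plan.text)
    plan.text = f"{plan.text} | adjusted for: {battlefield_update}"
    plan.revision += 1
    return plan

def commander_review(plan: Plan) -> bool:
    # The human in the loop: accept, refine or reject each revision.
    return "adjusted" in plan.text  # placeholder acceptance rule

# Planning is ongoing and never complete: each update triggers a
# machine revision followed by a human decision.
plan = Plan("Initial scheme of maneuver")
for update in ["bridge destroyed", "new SAM site detected"]:
    plan = agent_revise(plan, update)
    assert commander_review(plan)
print(plan.revision)  # 2
```

The design choice the model argues for is visible even in this toy: the agent never finalizes anything on its own, and every revision passes through the commander before it takes effect.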
Foundation models - AI models trained on extremely large datasets and adaptable to a wide range of tasks - know more about pop culture than war and require refinement. This makes it important to benchmark agents to understand their strengths and limitations. Second, absent training in AI fundamentals and advanced analytical reasoning, many users tend to use models as a substitute for critical thinking. No smart model can make up for a dumb - or worse, lazy - user.

Seizing the 'agentic' moment

To take advantage of AI agents, the U.S. military will need to institutionalize the building and adaptation of agents, include adaptive agents in war games, and overhaul doctrine and training to account for human-machine teams. This will require a number of changes.

First, the military will need to invest in additional computational power to build the infrastructure required to run AI agents across military formations. Second, it will need to develop additional cybersecurity measures and conduct stress tests to ensure the agent-augmented staff isn't vulnerable when attacked across multiple domains, including cyberspace and the electromagnetic spectrum. Third, and most important, the military will need to dramatically change how it educates its officers. Officers will have to learn how AI agents work, including how to build them, and start using the classroom as a lab to develop new approaches to the age-old art of military command and decision-making. This could include revamping some military schools to focus on AI, a concept floated in the White House's AI Action Plan released on July 23, 2025.

Absent these reforms, the military is likely to remain stuck in the Napoleonic staff trap: adding more people to solve ever more complex problems.
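The benchmarking point raised above - testing agents to learn their strengths and limitations before trusting them - can be sketched as a tiny harness that scores an agent per task category. The toy agent, the tasks and the category names are all hypothetical; a real benchmark would use graded domain tasks and many more of them.

```python
def benchmark_agent(agent, tasks):
    """Score an agent on labeled tasks; return accuracy per category."""
    scores = {}
    for prompt, expected, category in tasks:
        got = agent(prompt)
        hits, total = scores.get(category, (0, 0))
        scores[category] = (hits + (got == expected), total + 1)
    return {cat: hits / total for cat, (hits, total) in scores.items()}

# A toy agent strong on pop culture and weak on doctrine, mirroring
# the generalization gap foundation models show out of the box.
def toy_agent(prompt):
    return {"Who sang Thriller?": "Michael Jackson"}.get(prompt, "unknown")

tasks = [
    ("Who sang Thriller?", "Michael Jackson", "pop_culture"),
    ("Define mission command.", "decentralized execution", "doctrine"),
]
print(benchmark_agent(toy_agent, tasks))  # {'pop_culture': 1.0, 'doctrine': 0.0}
```

Per-category scores like these are what reveal where an agent needs refinement before it is embedded in a staff process.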
[2]
AI Is About to Radically Alter Military Command Structures That Date Back to Napoleon
Benjamin Jensen, Professor of Strategic Studies at the Marine Corps University School of Advanced Warfighting; Scholar-in-Residence, American University School of International Service. This article is republished from The Conversation under a Creative Commons license.
[3]
AI is about to radically alter military command structures that haven't changed much since Napoleon's army
AI agents are set to transform military command structures, automating routine tasks and enabling smaller, more effective teams. This shift promises to modernize outdated systems and address vulnerabilities in current command posts.
Military command structures have remained largely unchanged since Napoleon's era, despite significant technological advancements in warfare. These industrial-age architectures, designed for massed armies, have struggled to incorporate new technologies and adapt to modern warfare domains such as air, space, and information [1]. As a result, military headquarters have grown in size to accommodate expanded information flows and decision points, leading to diminishing returns and coordination challenges that risk jeopardizing mission command [2].
Source: Gizmodo
Artificial Intelligence (AI) agents, powered by large language models, are poised to revolutionize these outdated command structures. These autonomous, goal-oriented software systems can automate routine staff tasks, compress decision timelines, and enable smaller, more resilient command posts [3]. Key benefits include:
- Automation of routine staff work such as staff estimates and slide production
- Compressed decision timelines and faster command cycles
- Smaller, more resilient command posts that are harder to target
Experiments conducted at Marine Corps University have demonstrated that even basic large language models can accelerate staff estimates and introduce creative, data-driven options into the planning process [1].
Source: The Conversation
The need for change is acute, as current command structures have become increasingly vulnerable to modern warfare tactics. Sprawling command posts are now easily targetable by precision artillery, missiles, and drones, and susceptible to electronic warfare disruption. The "Graveyard of Command Posts" in Ukraine serves as a stark example of how static headquarters have become liabilities on the modern battlefield [2].
Research conducted by the Center for Strategic & International Studies' Futures Lab has identified the Adaptive Staff Model as the optimal design for an AI agent-augmented staff [3]. This approach:
- Embeds AI agents within continuous human-machine feedback loops
- Draws on doctrine, history, and real-time data to evolve plans on the fly
- Keeps humans in the loop for every decision
The model emphasizes ongoing military planning, generating a menu of options for commanders to consider, refine, and enact. Testing with multiple AI models has shown that this approach outperforms alternatives across various operational scenarios [1].
While the integration of AI agents in military command structures offers significant advantages, it is not without risks. AI models can be overly generalized or biased, often knowing more about pop culture than warfare. This necessitates careful refinement and benchmarking to understand their strengths and limitations [3].
As military organizations move towards adopting AI-augmented command structures, the focus will shift from traditional staff roles to more dynamic and adaptive planning processes. This transformation promises to enhance military effectiveness while addressing the vulnerabilities of current command systems in the face of modern warfare challenges.
Summarized by Navi