2 Sources
[1]
Institutionalise human control for AI-enabled systems: Lt Gen Vipul Singhal
India is embarking on a new era of responsible AI implementation, as highlighted by Lt Gen Vipul Singhal's call for robust human oversight. The initiative focuses on establishing clear roles for AI, ensuring that human judgment retains a pivotal role in decision-making, and recognizing AI-enabled systems as critical military assets that must undergo stringent testing protocols.

New Delhi: India must adopt AI-enabled systems responsibly, with institutionalised human control and clearly defined functions that may be assisted or recommended by AI, Lt Gen Vipul Singhal, DCOAS (IS&T), said on Wednesday.

With India emerging as a major military power with a rapidly growing AI ecosystem, and as a civilisation that has long understood that power must be governed by restraint, the country has both the capability and the credibility to lead the world in using AI responsibly in conflict, he said while addressing a session on 'Defence Perspective in AI' at the AI Impact Summit.

"For India, the question is never whether we should adopt AI-enabled systems, but how. We are clear that this transition must be undertaken responsibly," he said.

Elaborating on what a responsible and effective approach looks like, Lt Gen Singhal said, "First, we must institutionalise human control, not as a slogan, but as law. This requires clearly defining which functions may be assisted by AI, which may be recommended by AI, and which must always remain human decisions."

AI can inform decisions, but only humans can exercise judgment and bear responsibility for them, he asserted.

Second, AI-enabled systems need to be treated as weapon systems and tested accordingly, he said. The battlefield is the most chaotic data environment, and AI trained on clean satellite images in a computer lab will fail when it sees grainy, mud-soaked, smoke-obscured, deception-laden battlefield imagery, and can produce a wrong decision.
"Therefore, there needs to be the same certification, same red teaming and same TILE evaluation of AI-enabled systems," he noted.

Stressing sovereignty and trust, he said, "How does a commander trust the data that is being fed into the AI-enabled system to give him the decision support? There needs to be more clarity on that."

He expressed confidence that AI, as the new technology at the forefront, will also find ways and means to be regulated, noting that discussions under the United Nations Convention on Certain Conventional Weapons are already under way. Many governments and states are part of the discussion; consensus is complex, but it will come, he said.

Stating that India today stands at the cusp of three powerful realities, he said, "We are a major military power. We are a rapidly growing AI country or AI ecosystem, and we are a civilisation that has long understood that power must be governed by restraint."

India's ethos that "Shakti must go hand in hand with Dharma", or righteousness, gives it "both capability and credibility to lead the world in using AI responsibly in conflict", he said, adding that "great power comes with great responsibility".

Lt Gen Singhal also told the gathering that the Indian armed forces, and the Indian Army in particular, are cognizant of the transformative power of AI to increase their operational efficiency. "We are making every effort with direction to ensure that AI is incorporated into our decision support systems, into our surveillance, recce and all the other functions we do," he said.

The Indian armed forces are working with industry leaders, startups and academic institutions to harness AI for military applications, "drawing strength from India's vibrant innovation ecosystem and our own growing band of uniform innovators".
[2]
How India's Army Is Deploying AI in Military Operations
"AI can inform decisions. Only humans can make the judgment and take responsibility," said Lt. Gen. Vipul Singhal during a defence-focused session at the India AI Impact Summit.

Singhal's remark framed a broader debate at the summit on how India's armed forces are deploying artificial intelligence across operations, logistics, intelligence, and decision-support systems. Speakers repeatedly stressed that faster analysis and compressed decision timelines cannot dilute command responsibility, particularly in situations involving the use of force. Taken together, military leaders, defence scientists, industry executives, and academics converged on a central message: India must deploy AI as a force multiplier without surrendering moral agency, operational control, or strategic autonomy.

Senior Army officers described AI as operational rather than experimental. "AI is totally transforming the way we analyse, decide and act, and transforming warfare," said Brig. Deepak Kumar.

Lt. Gen. Rajiv Kumar Sahni, Director General of Electronics and Mechanical Engineering, said military effectiveness increasingly depends not on platforms alone but on engineering support, sustainment, and decision velocity. "It is the engineering support which provides the flexibility, endurance, and stamina to commanders in the field," Sahni said. He outlined three priority areas where the Army is actively seeking collaboration with industry and academia: "Help us place sensors at the right place, manipulate the data elements we already have, and give us predictive insights." He added that drones are no longer peripheral systems but a central focus of Army engineering, with emphasis on indigenous navigation, control analytics, production quality, and adversarial simulation to test performance in contested environments.

Army leaders rejected the idea that modernisation requires replacing large parts of India's existing arsenal.
"Legacy is not equal to obsolete," said Maj. Gen. Mohit Gandhi, who argued that cost, logistical familiarity, and operational constraints make wholesale replacement unrealistic. Instead, the Army has prioritised embedding sensors, analytics, and AI into existing platforms. "There are limited labelled datasets available for military equipment," he said, referring to the lack of high-quality historical data needed to train AI systems reliably in combat settings. He added that AI systems must remain explainable, resilient to jamming and spoofing, and capable of operating on secure or offline networks, with humans firmly in the loop to comply with the laws of armed conflict.

Beyond battlefield decision systems, the Army is also applying AI to core sustainment functions. Maj. Gen. P. S. Bindra framed predictive maintenance as a direct battlefield advantage, particularly for armoured fighting vehicles operating in extreme climates. "These machines are speaking to us," Bindra said. "Are we listening? Yes. But we need to now listen to them better." Bindra said the Army plans to move from scheduled maintenance to condition-based monitoring using sensors, data loggers, and AI models that predict residual useful life, or how long a component can safely operate before failure. This work is moving beyond conceptual pilots: the Army has initiated indigenous R&D projects, plans to float bids on the Government e-Marketplace (GeM), the government's online procurement platform, and will follow a pilot-to-scale approach, with successful systems eventually deployed across platforms and commands.

Ethical concerns sharpened when speakers discussed AI-enabled decision-making in combat. Lt. Gen. Vipul Singhal described a high-tempo operation in which a machine-generated analysis recommended an immediate strike. "The commander paused," Singhal said. "What does the machine not know?" The data showed adversary troops.
It failed, however, to capture an ongoing civilian evacuation, and the commander stopped the strike.

Speakers said AI increases the leadership burden rather than reducing it, as compressed decision cycles raise the risk of escalation if human judgment is sidelined. "Are we subjecting AI systems to the same rigor as other weapon systems?" Singhal asked.

Maj. Gen. Harsh Chhibber warned against treating AI outputs as morally neutral. "The requirement is to make better decisions, not bad decisions faster," Chhibber said. He said AI systems fail when battlefield context changes, and that they lack abductive reasoning. Referring to the Israeli military's Lavender database, Chhibber highlighted the ethical consequences of statistical error, noting that even high accuracy rates can translate into large numbers of wrongful deaths at scale. "Command responsibility is absolute in the military," he said. "You cannot do cognitive offloading to a machine." Any decision involving lethality, he said, must remain under human agency, with accountability resting squarely with the command, not with algorithms, developers, or statistical thresholds.

Academic speakers stressed that defence AI must prioritise explainability, observability, and override mechanisms. "AI can be used for situation summarisation and pattern recognition," said Prof. Ramakrishna, "but domain experts are better at precision." He said commanders must remain in the driver's seat, with visibility into what is being delegated to data pipelines, models, and hardware accelerators. "We need a glass box model," Ramakrishna said, referring to systems whose logic and decision paths can be inspected and overridden, unlike opaque black-box models.

Despite growing deployment, several speakers said defence AI governance remains underdeveloped. Pawan Anand said military AI differs fundamentally from civilian applications.
"This is probabilistic technology being used for deterministic outcomes," Anand said, referring to systems that produce statistical predictions but are used in life-and-death decisions. He identified gaps that remain largely unaddressed: "You have to ensure you can destroy it at the right time if you lose control," Anand said, adding that responsibility must be embedded across the entire AI lifecycle, not just at deployment.

Speakers repeatedly warned against dependence on foreign AI systems. "Off-the-shelf AI is strategic suicide," one speaker said. Singhal flagged India's reliance on imported Graphics Processing Units (GPUs), the specialised chips used to train and run AI models, as a long-term vulnerability. "We are reliant on imports," he said. "We need indigenous capability over the long term." Industry leaders stressed the need for sovereign platforms, alternative compute approaches, and on-premise edge systems. "The algorithms that run on the cloud may not come to your rescue in a battlefield scenario," one speaker said.

Speakers also outlined proposals for experimentation and acquisition reform. Sahni acknowledged that experimentation would involve failure. "Failures are part of our success story," he said, signalling a shift from traditional zero-failure defence acquisition models toward iterative AI development.

Ultimately, speakers drew a firm boundary. "Technology without warrior spirit is hollow," one speaker said. "But warrior spirit without technology in the modern battlefield is a tragedy." As India accelerates defence AI adoption, military leaders said the challenge lies not in deploying AI faster, but in ensuring humans remain firmly responsible for its consequences.
India's military leadership is establishing strict protocols for AI deployment in defense operations. Lt Gen Vipul Singhal emphasized that AI-enabled systems must undergo rigorous testing and maintain institutionalized human oversight. The Indian Army is actively deploying AI across surveillance, logistics, and decision support while ensuring commanders retain ultimate responsibility for lethal decisions.
The Indian Army is deploying AI in military operations with a clear mandate: human control over AI-enabled systems must be institutionalized as law, not merely as principle. Speaking at the AI Impact Summit, Lt Gen Vipul Singhal, DCOAS (IS&T), outlined India's approach to responsible AI implementation, emphasizing that while artificial intelligence can inform decisions, only humans can exercise judgment and bear accountability [1]. The framework distinguishes between functions that may be assisted by AI, those that may be recommended by AI, and those that must always remain human decisions.
Source: ET
This approach reflects India's position as both a major military power and a rapidly growing AI ecosystem. Lt Gen Vipul Singhal noted that India's civilizational ethos, where "Shakti must go hand in hand with Dharma", provides both capability and credibility to lead responsible AI use in conflict [1]. Military leaders at the summit converged on a central message: deploy AI as a force multiplier without surrendering moral agency, operational control, or strategic autonomy [2].

The Indian Army is treating AI-enabled systems as weapon systems that require the same certification, red teaming, and TILE evaluation protocols. Lt Gen Singhal highlighted a critical challenge: AI trained on clear satellite images in controlled environments will fail when confronted with grainy, smoke-filled, deception-laden battlefield imagery, potentially producing wrong decisions [1]. The battlefield is the most chaotic data environment, demanding stringent testing before deployment.
Source: MediaNama
Maj. Gen. Mohit Gandhi noted that the limited labeled datasets available for military equipment compound these challenges. AI systems must remain explainable, resilient to jamming and spoofing, and capable of operating on secure or offline networks, with humans firmly in the loop to comply with the laws of armed conflict [2]. Questions of sovereignty and trust remain paramount: commanders must be able to trust the data fed into the decision support systems that inform their tactical choices.

Far from reducing the leadership burden, AI-enabled decision-making actually increases human command responsibility. Lt Gen Singhal described a high-tempo operation in which machine-generated analysis recommended an immediate strike against adversary troops. The commander paused, asking what the machine did not know. The data had failed to capture an ongoing civilian evacuation, and the commander stopped the strike [2].

Maj. Gen. Harsh Chhibber emphasized that command responsibility is absolute in the military, warning against cognitive offloading to machines. "The requirement is to make better decisions, not bad decisions faster," he stated, referencing the Israeli military's Lavender database and noting that even high accuracy rates can translate into large numbers of wrongful deaths at scale [2]. Compressed decision cycles raise escalation risks if human judgment is sidelined.
Beyond combat applications, the Indian Army is actively deploying AI across surveillance, reconnaissance, logistics, and predictive maintenance. "AI is totally transforming the way we analyse, decide and act, and transforming warfare," said Brig. Deepak Kumar [2]. Lt. Gen. Rajiv Kumar Sahni outlined three priority collaboration areas: placing sensors at the right locations, manipulating existing data elements, and providing predictive insights.

Maj. Gen. P. S. Bindra framed predictive maintenance as a direct battlefield advantage, particularly for armoured fighting vehicles operating in extreme climates. "These machines are speaking to us. Are we listening? Yes. But we need to now listen to them better," he said [2]. The Army plans to move from scheduled maintenance to condition-based monitoring using sensors, data loggers, and algorithms that predict residual useful life. Indigenous R&D projects are under way, with plans to float bids on the Government e-Marketplace (GeM) and follow a pilot-to-scale approach.

Drones have become a central focus, with emphasis on indigenous navigation, control analytics, production quality, and adversarial simulation. Maj. Gen. Mohit Gandhi rejected wholesale modernization, noting that "legacy is not equal to obsolete." Cost, logistical familiarity, and operational constraints have led the Army to prioritize embedding sensors, analytics, and AI into existing platforms [2].

The Indian armed forces are working with industry leaders, startups, and academic institutions to harness AI for military applications, drawing strength from India's vibrant innovation ecosystem and a growing band of uniformed innovators [1]. Lt Gen Singhal expressed confidence that AI will be regulated, noting that the United Nations Convention on Certain Conventional Weapons is already addressing these issues, with many governments participating in complex but eventual consensus-building.

The emphasis on explainability, trust, and accountability reflects India's commitment to ensuring that faster analysis and compressed decision timelines do not dilute command responsibility, particularly in situations involving the use of force. As military effectiveness increasingly depends on engineering support, sustainment, and decision velocity rather than platforms alone, India's approach balances technological advancement with ethical constraints and strategic autonomy in an evolving security landscape.
Summarized by Navi