"Our goal is explainable AI. Not just saying it's fake, but also showing the why and how."
In today's era of impersonation, digital fakes are crossing over into real-life fraud, and traditional verification methods are no longer enough.
As AI gets smarter, so do the fraudsters. At a time when anyone with the right tools can pretend to be someone else, not just on audio but even on a video call, one Indian startup is building tech to spot the fakes and fight back.
Pune-based pi-labs is using AI to beat AI with a tool designed to expose fake content and restore trust in what we see and hear online.
In an exclusive interaction with AIM, Ankush Tiwari, the company's founder and CEO, expands on its AI tool 'Authentify', which is already helping banks, law enforcement, and other security-sensitive sectors distinguish between real and manipulated media.
During a live demo call, pi-labs' tech team showed just how easily deepfake attacks can be carried out. Within seconds, one team member transformed into Hollywood actor Tom Cruise, using only a static image and a few clicks. Soon, a realistic impersonation that could fool not only people but also existing security systems came to life.
"We created this deepfake in under two days," said Naman Kohli, marketing director at pi-labs. "It even passed the liveness detection test built into video KYC systems that banks use."
These kinds of attacks are no longer hypothetical. Tiwari recounted a personal "litmus test" he ran with a deepfake video call to his own mother, who couldn't tell it wasn't him.
Scenarios like this are now common in fraud cases. Criminals impersonate someone's child or boss on a call and convince them to send money. One case reportedly involved $25 million being transferred after a fake video call from a CEO.
To counter this growing threat, pi-labs developed Authentify, a detection engine that analyses media content frame by frame to identify and highlight manipulated segments. Users can upload a video, image, or audio clip and get a detailed report that flags any synthetic content.
The engine doesn't just flag fake content but also explains why. Using models trained on millions of faces from diverse geographies and cultures, including Indian-specific contexts such as turbans and bindis, Authentify identifies unnatural pixel movements, inconsistencies in eye and lip sync, and other subtle signals.
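The article doesn't describe Authentify's internals, but the idea of a frame-by-frame detector that highlights manipulated segments can be sketched in general terms. In the toy example below, the per-frame scores stand in for the output of a real detection model (a hypothetical stand-in, not pi-labs' pipeline), and consecutive high-scoring frames are grouped into flagged segments of the kind a report might highlight:

```python
# Illustrative sketch: group per-frame "synthetic" scores into flagged
# segments, the kind of output a frame-by-frame detector might report.
# The scores below are stand-ins for a real model's output.

def flag_segments(frame_scores, threshold=0.8):
    """Return (start, end) frame index ranges where consecutive
    frames score at or above the manipulation threshold."""
    segments = []
    start = None
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i                      # segment opens
        elif score < threshold and start is not None:
            segments.append((start, i - 1))  # segment closes
            start = None
    if start is not None:                  # segment runs to final frame
        segments.append((start, len(frame_scores) - 1))
    return segments

scores = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.88, 0.91]
print(flag_segments(scores))  # [(2, 4), (7, 8)]
```

A real system would attach per-segment explanations (eye-blink anomalies, lip-sync drift, pixel-level artefacts) to each flagged range, which is what the "why and how" reporting amounts to.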
"Our goal is explainable AI," said Kohli. "Not just saying it's fake, but also showing the why and how."
The tool can run in real time during video calls or after a call ends. It's also customisable, allowing clients to opt for either cloud-based or on-premise setups, a feature particularly important to intelligence and defence agencies wary of sending data outside secure environments.
The system doesn't stop at video. It also analyses audio, detecting synthetic speech and tracing it back to the tools used to generate it, whether it's Speechify, Descript, or any other voice cloning platform.
Unlike many global solutions that struggle with the Indian context, pi-labs has made localisation a priority. The team trained Authentify using data that includes religious and regional clothing, in a bid to prevent false flagging caused by misinterpreted cultural elements.
"Many foreign tools flag a turban or tika as manipulation," Kohli explained. "But our model understands the context."
The company's focus is currently on law enforcement, defence, and financial services, sectors where getting things wrong can have serious consequences. In partnership with firms like Finneca Solutions and Pune-based Accops, pi-labs is helping secure video KYC processes that are increasingly targeted by fraudsters.
There's more to come. The startup is working on integrating blockchain for better transparency and traceability, not for crypto, but to log and verify detection outcomes in a tamper-proof way. "It's all about taking regulators along while building," said Tiwari.
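Tiwari doesn't detail the blockchain design, but the core idea of tamper-proof logging can be illustrated with a simple hash chain: each detection record is bound to the previous one by a hash, so altering any past entry invalidates everything after it. This is a generic sketch of the technique, not pi-labs' actual implementation:

```python
import hashlib
import json

# Illustrative hash-chained audit log: each entry's hash covers the
# previous entry's hash plus its own payload, so editing any past
# record breaks verification of every subsequent link.

GENESIS = "0" * 64

def append_record(chain, record):
    """Append a detection record, chaining it to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every hash; any tampering makes this return False."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"file": "call.mp4", "verdict": "synthetic", "score": 0.97})
append_record(log, {"file": "voice.wav", "verdict": "authentic", "score": 0.04})
print(verify_chain(log))                    # True
log[0]["record"]["verdict"] = "authentic"   # tamper with history
print(verify_chain(log))                    # False
```

In a production setting the chain would typically be anchored to a distributed ledger so that regulators and clients can independently verify detection outcomes, which is the transparency goal Tiwari describes.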
pi-labs is also part of NVIDIA's Inception programme, giving it access to discounted GPUs and technical resources to help scale faster. That's a crucial boost, given how much compute is required to train and constantly update AI models.
Tiwari, who comes from a family background in cybersecurity and defence, is blunt about what needs to change. "In the physical world, we buy a house and lock it. In software, we build apps and don't even install antivirus."
For him, the digital world is now the real world, and we're more vulnerable than ever. From HR hiring to finance to everyday communication, deepfakes are altering how trust is established online.
"We used to say we live in a software-defined world," he said. "Now we live in an algorithm-controlled one."
As the speed and realism of deepfakes continue to evolve, pi-labs is in a constant state of catch-up, rolling out updates every four to six weeks like an antivirus company, learning from each new attack method.
And that's their approach: staying a calculated step behind, but catching up fast enough to make the difference.