As artificial intelligence (AI) technologies continue to evolve at an unprecedented pace, the need for secure, transparent, and trustworthy AI systems has become a national priority. To strengthen India’s AI governance and safety infrastructure, IndiaAI, an Independent Business Division under the Ministry of Electronics and Information Technology (MeitY), has taken a major step forward under its Safe & Trusted AI pillar. Through the second round of its Expression of Interest (EoI), launched on December 10, 2024, IndiaAI aims to promote research and innovation focused on AI safety, bias mitigation, and forensic intelligence.
Overview of the Initiative
IndiaAI’s initiative received over 400 proposals from academic institutions, startups, research organizations, and civil society groups. A multi-stakeholder committee comprising technical experts evaluated the submissions to identify projects that align with India’s vision of responsible AI. As a result, five innovative projects have been selected to drive advancements in AI safety and governance, each addressing critical themes such as deepfake detection, bias auditing, and generative AI security.
Selected Projects under Safe & Trusted AI Pillar
- Deepfake Detection Tools
  - Saakshya: Developed by IIT Jodhpur (CI) and IIT Madras, this project introduces a Multi-Agent, RAG-Enhanced Framework for deepfake detection and governance. It aims to strengthen India’s capability to detect and regulate synthetic media.
  - AI Vishleshak: A collaboration between IIT Mandi and the Directorate of Forensic Services, Himachal Pradesh, this system enhances detection of audio-visual deepfakes and forged handwritten signatures using explainable and robust AI techniques.
- Real-Time Voice Deepfake Detection System
  - Developed by IIT Kharagpur, this system focuses on identifying voice-based deepfakes in real time, a crucial capability for preventing misinformation and fraud in digital communications.
- Bias Mitigation in AI Models
  - Digital Futures Lab and Karya have undertaken a project titled Evaluating Gender Bias in Agriculture LLMs. The goal is to create Digital Public Goods (DPG) for fair data benchmarking and to mitigate gender bias in agricultural AI systems, ensuring inclusivity and fairness.
- Penetration Testing and Evaluation of AI Models
  - Globals ITES Pvt. Ltd., in partnership with IIIT Dharwad, will develop Anvil, a specialized tool for penetration testing and evaluation of Large Language Models (LLMs) and generative AI systems. It aims to identify vulnerabilities, strengthen security, and ensure responsible deployment of AI.
Driving Safe and Responsible AI Development in India
These five projects embody IndiaAI’s mission to establish a Safe & Trusted AI ecosystem that promotes ethical practices, inclusivity, and technological resilience. Together, they will strengthen India’s capabilities in deepfake forensics, bias detection, and generative AI risk assessment.
By combining the strengths of academia, industry, and civil society, IndiaAI continues to translate policy into practice. The initiative marks a significant stride in AI governance, helping ensure that AI solutions deployed in India are secure, reliable, and aligned with democratic values.
About IndiaAI
IndiaAI operates as an Independent Business Division under MeitY, serving as the implementation agency for the IndiaAI Mission. Its goal is to democratize the benefits of AI, promote technological self-reliance, and ensure ethical and responsible AI use across all sectors. By fostering innovation and collaboration, IndiaAI aims to position India as a global leader in trustworthy AI development.