The IT ministry has invited proposals from entities for the development of technology tools to create a trusted AI ecosystem, including the detection of deepfakes, as per information published on Meity's website.
As part of the IndiaAI mission, the Safe and Trusted AI pillar envisages the development of indigenous tools, frameworks and self-assessment checklists for innovators, among other measures, to put in place adequate guardrails and advance the responsible adoption of AI.
"To spearhead this movement, IndiaAI is calling for Expressions of Interest (EOI) from individuals and organisations that want to lead AI development projects to foster accountability, mitigate AI harms and promote fairness in AI practices," the note for proposal said.
Meity has invited proposals for watermarking and labelling tools to authenticate AI-generated content, ensuring it is traceable, secure, and free of harmful material.
The proposal also calls for establishing AI frameworks that align with global standards, ensuring AI respects human values and promotes fairness.
The proposal includes the creation of "Deepfake Detection Tools to enable real-time identification and mitigation of deepfakes, preventing misinformation and harm for a secure and trustworthy digital ecosystem".
In the proposal, Meity also seeks the creation of risk management tools and frameworks to enhance the safe deployment of AI in public services, as well as stress-testing tools to evaluate how AI models perform under extreme scenarios, detect vulnerabilities, and build trust in AI for critical applications.
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)