Infosys Launches Open-Source Responsible AI Toolkit to Enhance Trust and Transparency in AI

Infosys has introduced an open-source Responsible AI Toolkit as part of its Topaz Responsible AI Suite, aiming to address AI-related risks and foster ethical AI adoption across industries.

Infosys Unveils Open-Source Responsible AI Toolkit

Infosys, a global leader in digital services and consulting, has launched an open-source Responsible AI Toolkit as part of its Infosys Topaz Responsible AI Suite. This initiative aims to address the growing concerns surrounding ethical AI adoption and enhance trust and transparency in AI systems [1].

Key Features of the Responsible AI Toolkit

The toolkit is built on Infosys' AI3S framework (Scan, Shield, and Steer) and offers several advanced features:

  1. Defensive Technical Guardrails: Specialized AI models and shielding algorithms that detect and mitigate security threats, privacy breaches, biased outputs, deepfakes, and other risks [2] (a simplified sketch of this shielding pattern follows the list).
  2. Enhanced Explainability: Provides insights into AI-generated decisions without compromising performance or user experience [1].
  3. Flexibility and Compatibility: The toolkit is customizable and compatible with diverse AI systems, supporting both cloud and on-premise deployments [3].
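The toolkit's actual interfaces and model-based detectors are documented in the open-source project itself; the Python sketch below is only a minimal, hypothetical illustration of the shielding pattern described in item 1, wrapping an arbitrary text-generation callable in input and output scans. None of the names used here (scan_text, guarded_generate, GuardrailResult) are part of the Infosys toolkit, and the regex checks merely stand in for the trained detection models a real guardrail would use.

```python
# Hypothetical illustration only: NOT the Infosys Responsible AI Toolkit's API.
# Sketches the general "shield" idea: scan a prompt and a model response for
# risky content before returning the response to the user.

import re
from dataclasses import dataclass, field


@dataclass
class GuardrailResult:
    allowed: bool
    reasons: list = field(default_factory=list)


def scan_text(text: str) -> GuardrailResult:
    """Toy checks standing in for trained risk-detection models."""
    reasons = []
    # Flag something that looks like an email address (privacy risk).
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        reasons.append("possible personal data (email address)")
    # Flag an obviously sensitive keyword (credential exposure).
    if "password" in text.lower():
        reasons.append("possible credential exposure")
    return GuardrailResult(allowed=not reasons, reasons=reasons)


def guarded_generate(prompt: str, generate) -> str:
    """Wrap any text-generation callable with input and output scans."""
    in_check = scan_text(prompt)
    if not in_check.allowed:
        return f"Request blocked: {', '.join(in_check.reasons)}"
    response = generate(prompt)
    out_check = scan_text(response)
    if not out_check.allowed:
        return f"Response withheld: {', '.join(out_check.reasons)}"
    return response


def echo_model(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"Model answer to: {prompt}"


if __name__ == "__main__":
    print(guarded_generate("What is responsible AI?", echo_model))
    print(guarded_generate("My password is hunter2", echo_model))
```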

Industry Recognition and Support

The launch of the Responsible AI Toolkit has garnered support from various industry leaders and government officials:

  • Joshua Bamford, Head of Science, Technology and Innovation at the British High Commission, called it a "benchmark for technological excellence" [2].
  • Sunil Abraham, Public Policy Director at Meta, emphasized the importance of open-source tools in ensuring AI safety and diversity [1].
  • Abhishek Singh, Additional Secretary at MeitY, welcomed the move, stating that it would help mitigate bias in AI models and enhance security, privacy, and fairness in AI-based solutions [2].

Infosys' Commitment to Responsible AI

Infosys has been actively advancing Responsible AI initiatives:

  1. Establishment of the Responsible AI Office and dedicated offerings [2].
  2. ISO 42001:2023 certification for AI management systems [3].
  3. Participation in global AI policy discussions through organizations such as the NIST AI Safety Institute Consortium, WEF AIGA, the AI Alliance, and Stanford HAI [1].

By open-sourcing the Responsible AI Toolkit, Infosys aims to foster a collaborative ecosystem that addresses the complex challenges of AI bias, opacity, and security. This move reinforces the company's commitment to making AI safe, reliable, and ethical for all stakeholders in the rapidly evolving AI landscape.
