Infosys has launched an open-source Responsible AI toolkit that aims to address the risks and ethical concerns of enterprise AI adoption. The initiative is part of the Infosys Topaz Responsible AI Suite, which is designed to help enterprises innovate responsibly while mitigating the ethical risks associated with AI adoption.
The Infosys Responsible AI toolkit builds on the company’s AI3S framework (scan, shield and steer). It provides enterprises with advanced defensive technical guardrails, including specialised AI models and shielding algorithms. These features help detect and mitigate issues such as privacy breaches, security attacks, sensitive information leaks, biased outputs, harmful content, copyright infringement, hallucinations, malicious use and deepfakes.
Additionally, the toolkit enhances model transparency by offering insights into the rationale behind AI-generated outputs without compromising performance or user experience. The open-source nature of the toolkit ensures flexibility and ease of implementation. It is fully customisable, compatible with diverse models and agentic AI systems, and integrates seamlessly across cloud and on-premises environments.
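To illustrate the kind of guardrail approach described above, the sketch below shows, in schematic Python, how pre- and post-inference checks for issues such as sensitive information leaks and harmful content could wrap an existing model. It is not the toolkit's actual API; the names (GuardrailFinding, guarded_generate, check_pii and so on) are hypothetical placeholders used only to make the "scan, shield and steer" idea concrete.

```python
# Hypothetical guardrail pipeline in the spirit of "scan, shield and steer".
# None of these names come from the Infosys toolkit; they are illustrative only.
import re
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class GuardrailFinding:
    check: str    # which guardrail flagged the content
    detail: str   # human-readable description of the issue


@dataclass
class GuardedResponse:
    text: str
    findings: List[GuardrailFinding] = field(default_factory=list)


def check_pii(text: str) -> List[GuardrailFinding]:
    """Scan for patterns that look like sensitive information (e.g. email addresses)."""
    return [GuardrailFinding("pii", f"possible email address: {m}")
            for m in re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)]


def check_blocklist(text: str) -> List[GuardrailFinding]:
    """Crude stand-in for a harmful-content classifier."""
    blocked = {"malware", "deepfake instructions"}
    return [GuardrailFinding("harmful_content", term)
            for term in blocked if term in text.lower()]


def guarded_generate(prompt: str,
                     model: Callable[[str], str],
                     checks: List[Callable[[str], List[GuardrailFinding]]]) -> GuardedResponse:
    """Shield the prompt, call the model, then re-scan the output before returning it."""
    findings = [f for check in checks for f in check(prompt)]
    if findings:
        # Steer: refuse or sanitise rather than forwarding a risky prompt.
        return GuardedResponse("Request blocked by input guardrails.", findings)
    output = model(prompt)
    findings = [f for check in checks for f in check(output)]
    return GuardedResponse(output, findings)


if __name__ == "__main__":
    fake_model = lambda p: f"Echo: {p}"  # placeholder for any LLM call
    result = guarded_generate("Summarise this report for alice@example.com",
                              fake_model, [check_pii, check_blocklist])
    print(result.text, result.findings)
```

In a production toolkit the checks would be specialised models and shielding algorithms rather than regexes and word lists; the point of the sketch is simply that guardrails can wrap any model call, which is what makes such a toolkit model-agnostic and deployable across cloud and on-premises environments.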
Balakrishna DR, executive vice president, global services head, AI and Industry Verticals, Infosys, said, “As AI becomes central to driving enterprise growth, its ethical adoption is no longer optional. The Infosys Responsible AI toolkit ensures that businesses remain resilient and trustworthy while navigating the AI revolution. By making the toolkit open source, we are fostering a collaborative ecosystem that addresses the complex challenges of AI bias, opacity and security. It’s a testament to our commitment to making AI safe, reliable and ethical for all.”
Organisations can access the Infosys Responsible AI toolkit through the company’s official platforms.