AI Verify Testing Framework

What is the AI Verify testing framework?

The AI Verify Testing Framework helps companies assess the responsible implementation of their AI systems against 11 internationally recognised AI governance principles. While the Traditional AI version has been available since 2022, the framework has now been updated to include considerations for Generative AI applications.

The framework is aligned with other international frameworks, such as those from the EU, G7, OECD, and the US. It also complements our technical testing tools and guidelines for Traditional AI and Generative AI.

Each principle has desired outcomes that can be achieved through specified processes. The implementation of these processes can be validated through documentary evidence. A minimal sketch of how this structure can be recorded follows the breakdown below.

Principles

Overarching considerations to which AI applications should adhere

Outcomes

For every principle, there are desired outcomes. These outcomes can be achieved through technical and non-technical processes, alongside technical tests where applicable

Processes

Testing processes are actionable steps to be carried out to achieve the desired outcomes

Evidence

These processes are validated by documentary evidence
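
To make this structure concrete, here is a minimal sketch of how a single process check could be recorded as structured data. This is an illustrative assumption only – the field names, example values, and validation rule below are not the actual schema used by the AI Verify tool.

    from dataclasses import dataclass, field

    @dataclass
    class ProcessCheck:
        # Hypothetical record for one process check (not the AI Verify schema)
        principle: str          # one of the 11 governance principles
        outcome: str            # desired outcome under that principle
        process: str            # actionable step carried out
        evidence: list[str] = field(default_factory=list)  # documentary evidence

        def is_validated(self) -> bool:
            # A process counts as implemented only once documentary
            # evidence has been attached
            return len(self.evidence) > 0

    check = ProcessCheck(
        principle="Transparency",
        outcome="Affected users understand how the AI system is used",
        process="Publish a notice describing the system's purpose and limits",
        evidence=["transparency_notice_v2.pdf"],
    )
    print(check.is_validated())  # True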

11 AI Governance Principles

1. Transparency

Ability to provide responsible disclosure to those affected by AI systems so that they can understand the outcome

2. Explainability

Ability to assess the factors that led to the AI system’s decision, its overall behaviour, outcomes, and implications

3. Repeatability / Reproducibility

The ability of a system to consistently perform its required functions under stated conditions for a specific period of time, and for an independent party to produce the same results given similar inputs

4. Safety

AI should not result in harm to humans (particularly physical harm), and measures should be put in place to mitigate harm

5. Security

AI security is the protection of AI systems, their data, and the associated infrastructure from unauthorised access, disclosure, modification, destruction, or disruption. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorised access and use may be said to be secure

6. Robustness

AI systems should be resilient against attacks and attempts at manipulation by malicious third-party actors, and should still function without producing undesirable output despite unexpected input

7. Fairness

AI should not result in unintended and inappropriate discrimination against individuals or groups

8. Data Governance

Governing data used in AI systems, including putting in place good governance practices for data quality, lineage, and compliance

9. Accountability

AI systems should have organisational structures and actors accountable for their proper functioning

10. Human Agency and Oversight

Ability to implement appropriate oversight and control measures with humans-in-the-loop at the appropriate juncture

11. Inclusive Growth, Societal and Environmental Well-being

The potential for trustworthy AI to contribute to overall growth and prosperity for all – individuals, society, and the planet – and advance global development objectives

How to use and complete the Testing Framework

Use the tool to document your framework implementation and generate a report

The testing framework for Generative AI is available as a tool that lets you systematically evaluate and document the responsible AI practices implemented during the deployment of your Generative AI applications.

If you use the tool to complete the process checks, you can generate a summary report that shows your alignment with the AI Verify testing framework. You can also use this report to identify areas for improvement, demonstrate responsible AI practices, and build trust with your stakeholders.
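
As a rough illustration of the idea behind such a summary report, the checks completed for each principle could be tallied by whether documentary evidence has been attached. The records and tallying below are assumptions for illustration; the tool defines its own report format and scoring.

    from collections import defaultdict

    # Hypothetical process-check records; field names are illustrative
    # assumptions, not the AI Verify tool's data format
    checks = [
        {"principle": "Transparency", "evidence": ["notice_v2.pdf"]},
        {"principle": "Transparency", "evidence": []},
        {"principle": "Fairness", "evidence": ["bias_review.docx"]},
    ]

    # Tally, per principle, how many checks have evidence attached
    tally = defaultdict(lambda: [0, 0])  # principle -> [evidenced, total]
    for c in checks:
        tally[c["principle"]][0] += bool(c["evidence"])
        tally[c["principle"]][1] += 1

    for principle, (evidenced, total) in tally.items():
        print(f"{principle}: {evidenced}/{total} checks evidenced")
    # Transparency: 1/2 checks evidenced
    # Fairness: 1/1 checks evidenced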

Start using the process checks tool today to strengthen your organisation’s AI governance, build stakeholder trust, and showcase your commitment to responsible AI practices!

International Alignment

The AI Verify testing framework is consistent with international AI governance frameworks such as those from the European Union, the OECD, and the US, and has been mapped to other international frameworks, including:

  • U.S. National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (US NIST AI RMF)
  • U.S. NIST AI RMF: Generative Artificial Intelligence Profile (US NIST AI RMF – Generative AI Profile)
  • Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (Hiroshima Process CoC)
  • International Organization for Standardization (ISO) standard ISO/IEC 42001

Who should use the AI Verify testing framework?

The AI Verify testing framework is intended for:

  • AI Application Owners / Developers looking to demonstrate and document responsible AI governance practices
  • Internal Compliance Teams looking to ensure responsible AI practices have been implemented
  • External Auditors looking to validate their clients’ implementation of responsible AI practices

Background – How was the testing framework developed?

AI Verify was first developed in consultation with companies from different sectors and of different scales. These companies include AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (part of Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI, and X0PA.AI.

On 25 May 2022, the AI Verify testing framework and software toolkit were launched by IMDA and PDPC as a Minimum Viable Product (MVP) for international pilot and feedback.

On 7 June 2023, the AI Verify testing framework and toolkit were open-sourced on GitHub.

On 29 May 2025, the updated AI Verify testing framework was released. The testing framework has been enhanced to address risks posed by Generative AI. With this update, companies can now apply the AI Verify testing framework to both Traditional AI and Generative AI use cases.

Testing report

Download a dummy report to understand what an AI Verify report comprises.

Use the AI Verify testing framework today!


Preview all the questions

1. Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) that has/have been developed, used, or deployed in your organisation, and what it is used for (e.g. product recommendation, improving operational efficiency)?

2. Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3. Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4. Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work was needed for the testing, what challenges were faced, and how were they overcome? How did you find the testing process? Did it take long to complete?

5. Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions after using AI Verify?

6. Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?