AI Assurance Roundtable: Lessons from the Global AI Assurance Pilot

🚀 100+ participants. 33+ companies. 9 geographies. 10 industries. Hundreds of hours.

Together with its parent body, IMDA, the AIVF launched the Global AI Assurance Pilot in February 2025 to help codify emerging norms and best practices around the technical testing of Generative AI (“GenAI”) applications. The Pilot:

  • Paired AI assurance and testing providers with organisations deploying Generative AI applications
  • Focused on technical testing of the real-life application (not the underlying foundation model)
  • Used the lessons learnt from specific examples to create generalisable insights on “what and how to test”

AIVF and IMDA released the insights and case studies from the Global AI Assurance Pilot on 29 May 2025 at the Asia Tech x Summit 2025.

Lessons from the pilot:

Read the Main Report from the Pilot here.

Read the 17 case studies from the Pilot here.

At Asia Tech x Summit 2025, we also hosted a roundtable for industry practitioners to dive deep into these lessons, raise challenges they were facing, and discuss strategies and insights from their own real-world GenAI systems testing experience. Each roundtable was facilitated by an industry peer or a participant of the Pilot.

Our Table Captains, who facilitated the discussions at the roundtable

This Pilot is just the beginning: it lays the groundwork for future norms and standards in the technical testing of GenAI applications, hopefully making GenAI boringly predictable.

To everyone who took part: Thank you! Your work is already creating ripple effects across the ecosystem. Together, we’re shaping a safer, more reliable AI future.
