Building trust, driving innovation: AI Governance lessons from Japan and Singapore


Japan and Singapore are carving distinct yet complementary paths in AI governance — both aimed at enabling trust and driving innovation across industries. On 10 September 2025, AIVF and Google hosted Prof Hiroki Habuka (Research Professor at the Graduate School of Law, Kyoto University, and the CEO of Smart Governance Ltd.) and Wan Sie Lee (Cluster Director, AI Governance and Safety, IMDA) at the Google office. 

More than 40 founders, startup leaders, innovators, industry practitioners, policymakers and ecosystem builders gathered to dive deep into the following topics:

  • Japan’s 2025 AI regulatory approach and its implications for businesses
  • Japan’s quest for AI leadership and response to global AI competition, including the “DeepSeek Shock”
  • How Singapore’s governance approach — supported by practical tools, programmes, and research — helps industry build confidence in the development and use of AI

Prof Habuka shared his perspective on Japan’s approach:


Through the AI Promotion Act and the 2025 AI Interim Report, Japan emphasises a soft-law, innovation-first approach that encourages collaboration between government, business, academia, and citizens.

Wan Sie Lee shared Singapore’s approach:

Taking a balanced and practical stance, Singapore is building a trusted AI ecosystem through technology-agnostic regulations to protect against harms, practical tools to address AI-specific issues, investment in research, and international alignment.

These frameworks are not just about managing risks — they create the confidence industries need to adopt trustworthy AI at scale, unlocking new opportunities for innovation and growth.



Preview all the questions

1. Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), the AI solution(s) developed, used or deployed in your organisation, and what they are used for (e.g. product recommendation, improving operational efficiency)?

2. Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?

3. Your reasons for using AI Verify – Why did your organisation decide to use AI Verify?

4. Your experience with AI Verify – Could you share your journey in using AI Verify? For example, what preparation work was needed for the testing, what challenges were faced, and how were they overcome? How did you find the testing process? Did the testing take long to complete?

5. Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? Have you taken any actions after using AI Verify?

6. Your thoughts on trustworthy AI – Why is demonstrating trustworthy AI important to your organisation and to other organisations using AI systems? Would you recommend AI Verify? How does AI Verify help you demonstrate trustworthy AI?
Enter your name and email address below to download the Discussion Paper by Aicadium and IMDA.
Disclaimer: By proceeding, you agree that your information will be shared with the authors of the Discussion Paper.