
AI safety and testing are crucial to the Analytics & AI Association of the Philippines (AAP) and its members because they ensure responsible AI deployment, aligning with our mission to foster ethical and inclusive innovation. Joining the AI Verify Foundation exemplifies our commitment to advancing trustworthy AI, promoting global standards, and enhancing collaboration. This partnership underscores our dedication to creating a robust AI ecosystem that benefits society while upholding integrity and transparency in AI practices.

For ABV, joining the AI Verify Foundation reinforces our role as an enterprise-grade safety layer for AI, helping organisations deploy systems that are safe, well-governed, trustworthy and clearly accountable. It allows us to contribute to practical, verifiable standards so that enterprises and governments in highly regulated environments can scale AI with confidence, meet regulatory expectations and protect the people and communities their systems impact.

ACCA is passionate about striking the right balance between harnessing the benefits of AI and doing so in a responsible way that considers the public interest. Our members have expertise in areas such as assurance, internal controls and risk/governance and can bring a business and finance lens to complement the work of technologists. We see joining the AI Verify Foundation as an important way to build this type of partnership – so that the pace of AI development doesn’t leave users and the public behind.

At Accenture, we define Responsible AI as the practice of designing and deploying AI systems that prioritize safety, fairness, and positive impact on people and society. Our aim is to build trust with users affected by AI. When AI is ethically designed and implemented, it enhances the potential for responsible collaborative intelligence.
Our commitment to Responsible AI aligns with the government’s broader efforts to harness the power of AI for the greater public good. We take pride in supporting the Foundation to assist organisations in scaling AI with confidence, ensuring compliance, and maintaining robust security measures.


At ActiveFence, we believe that AI testing is paramount to demonstrating responsible AI. Our proactive approach to AI safety aligns with the AI Verify Foundation’s mission, and by joining forces, we aim to advance the deployment of trustworthy AI, ensuring a safer online world. We believe in continuous improvement and collaboration to address the evolving challenges of AI, and the AI Verify Foundation is an ideal partner to achieve this.

AI is transforming the way we work and create. At Adobe, AI has been instrumental in helping to further unleash the creativity and efficiency of our customers through our creative, document, and experience cloud solutions.
Adobe is proud to be one of the first to join the AI Verify Foundation to help foster, advance, and build a community to share best practices here in Singapore. Partnering with government bodies such as IMDA is an important opportunity to share ideas and ensure that the full potential of AI is realised responsibly.

Advai is a leader in AI Safety and Security, dedicated to shaping the future of AI Assurance to ensure AI systems are deployed safely and securely. Joining the AI Verify Foundation strengthens our commitment to responsible AI by driving the development of robust testing frameworks, fostering transparency, and building global trust in AI technologies.

As a leading RegTech company relying on cutting-edge AI, we want to be a part of a community with other like-minded companies that equally value building fair and explainable AI. We believe that the AI Verify Foundation will benefit all entities that employ AI through the adoption of a set of world-leading AI ethics principles.

Aicadium is proud to be a member of the AI Verify Foundation. With the rapid growth of AI in business, government, and the daily lives of people, it is vitally important that AI is robust, fair, and safe.
We look forward to working with the Foundation to take AI governance to the next level. We are committed to the development of rigorous, technical algorithmic audits and third-party AI test lab capabilities, which we believe are an essential component of the AI ecosystem to help organisations deliver AI as a benefit to all.

Companies recognise the power of AI to create significant business impact, but they are also cognisant of the need to deploy AI in a responsible manner. We believe the recommended processes and tools developed by the AI Verify Foundation will significantly aid companies seeking to demonstrate compliance with a proper AI design standard, thus lowering the time and cost of getting to market.

The next generation of AI will be responsible AI. Our company is targeting the development of an all-in-one AI model and data diagnosis solution for responsible and trustworthy AI. The AI Verify Foundation provides us with a platform and opportunities to collaborate, learn, and make a meaningful impact in advancing responsible AI practices on a broader scale.

As an AI solution provider, we recognise the incredible power and potential that AI has, as it has started to deeply integrate with our day-to-day lives and transform the world around us. However, we also understand the importance of responsible and ethical adoption of this technology to ensure a safer and more equitable future for all.
Our vision is a world where AI is harnessed for the greater good, where businesses, governments, and individuals devote as much emphasis and resources to the development and implementation of responsible AI tools, frameworks, and standards as they do to commercial gains. We are committed to being a key member of the AI Verify Foundation, working together to shape a future where technology and humanity can thrive in harmony.

At AIQURIS, ensuring the safety, reliability and ethical use of AI is central to our mission. We empower organisations to fully harness this transformative technology by identifying and managing risks and by ensuring the overall quality of AI systems.
The AI Verify Foundation offers a unique environment for developing and deploying AI responsibly, in collaboration with platform members. By promoting best practices and standards, it supports the entire ecosystem in delivering high-performance, compliant AI solutions that organisations can trust and confidently scale.

Cybersecurity risks to AI can impede innovation, adoption, and digital trust, ultimately hampering the growth of organizations and society. AIShield provides comprehensive and self-service AI security products, serving as crucial tools for AI-first organizations across multiple industries and for AI auditors. These solutions ensure AI systems are secure, responsible, and compliant with global regulations. As part of the AI Verify Foundation, AIShield remains committed to advancing AI Security technology and expertise, while steadfastly pursuing its mission of “Securing AI Systems of the World”.

AI testing reinforces Alibaba Cloud’s dedication to safe and reliable AI solutions across our portfolio, embodying responsible AI practices while empowering customers. By joining the AI Verify Foundation, we aim to collaborate with the global open-source ecosystem to advance AI testing tools that drive responsible innovation, while sharing and promoting industry-leading standards and best practices.

Alteryx is excited to partner with the AI Verify Foundation to advance adoption of, and trust in, responsible artificial intelligence technologies that can transform lives around the world. As a leader in responsible AI, Alteryx believes that safety, transparency, and fairness must be the foundation of AI development and adoption, and we are committed to working through the AI Verify Foundation to strengthen these vital elements.


At Armilla, we’re committed to advancing responsible AI by providing risk mitigation solutions backed by rigorous AI assessments. Our comprehensive evaluations identify potential vulnerabilities, ensuring compliance and fairness, while our risk mitigation solutions give businesses the confidence to deploy AI with the assurance that they’re reducing exposure to operational, reputational, and regulatory risks.

Ant Group focuses on building a robust technology governance framework as the fundamental guideline for our technological development. To us, it is crucial to ensure that technology and AI can be used in a way that benefits people in a fair, respectful, trustworthy and responsible manner. There is immense potential for technology to help underprivileged people, but sustainable technological development requires established standards around the basic principles of AI governance and an institutional framework for evaluating governance gaps. We need to ensure that the technology and AI we develop and deploy will be in line with ethical principles and societal values, ultimately bringing positive impacts to our communities.

As organisations around the world continue to adopt AI solutions at the current pace and scale, they need to put proper governance and assurance controls in place to ensure these solutions are safe and compliant with existing and upcoming regulations. Asenion is focused on accelerating responsible AI innovation, and our partnership with the AI Verify Foundation hopefully enables even more organisations to accelerate the safe and responsible adoption of AI.

Asia Verify is committed to leveraging technology to make trust easy when doing business with Asia. Effective governance and shared ethics principles are essential to effective AI, which in the words of Stephen Hawking, could be the biggest event in the history of our civilisation. We are delighted to contribute to the AI Verify Foundation.


AssureQA & Trust4AI were built on the principle that AI safety and rigorous testing are non-negotiable. By joining the AI Verify Foundation, we’re contributing to open, international standards that strengthen trust, improve AI reliability, and guide responsible AI adoption across industry and government.

Resilient and Safe AI is a key research area for A*STAR, as we believe it is essential to reaping AI’s full transformative potential. As a member of the AI Verify Foundation, A*STAR will harness its AI Governance Testing toolkit and its extensive ecosystem to continue developing AI technologies that are trusted by our industry partners and the community.

Asurion is delighted to support the mission of the AI Verify Foundation in promoting trustworthy AI solutions. We recognise the importance of responsible AI development, and our commitment aligns with the efforts of IMDA in Singapore to establish robust AI governance frameworks and toolkits. By actively participating in Singapore’s AI Governance Testing Framework and Toolkit, we aim to contribute to the adoption of best practices and accelerate the responsible development of AI technology. Asurion remains dedicated to harnessing the power of AI Verify to drive innovation while upholding ethical standards, ensuring a brighter future powered by trustworthy AI.

AI is on a trajectory to shape human existence. It is a powerful tool that is already affecting many aspects of society, which is why there are urgent calls to ensure AI development and deployment are responsible, ethical, and secure. But this can only be achieved if we work together. We admire and share AI Verify Foundation’s mission to leverage the efforts of the global open-source community to create AI testing tools and collaborate on governance practices that promote responsible AI development, together.
Trust is at the heart of our AI solutions, and it is an honor to join AIVF and keep building trust through ethical AI.


At BABL AI, we are committed to ensuring AI is developed and governed in ways that prioritize human flourishing. Joining the AI Verify Foundation aligns with our mission to advance responsible and trustworthy AI by fostering rigorous testing and assurance of AI systems, grounded in scientific methods and human values. Together, we can help shape global standards for AI safety and accountability, ensuring ethical AI deployment worldwide.

BPP is delighted to join the AI Verify Foundation and contribute to the building of responsible AI, which is a key facet of our energy-efficient AI solutions.

At BCG X, rigorous AI safety testing and evaluation is critical to balancing ethical standards with generating lasting business impact. We strongly believe that AI solutions cannot be built and scaled without a robust Responsible AI program that delivers transformative business value while mitigating financial, reputational, and regulatory risks for our clients and potential harms to individuals and society. As BCG X uses AI to solve some of the biggest challenges our clients face, balancing business impact with strong ethical standards is critical for responsible AI deployment. Joining the AI Verify Foundation allows us to share our experience and underscores our commitment to, and belief in, Singapore as a leading innovation hub.

BDO recognises AI’s potential to revolutionise organisations and unlock human capabilities. We focus on responsible AI adoption, translating theory into practical solutions for business challenges. Our collaboration with AI Verify strengthens our approach in three crucial areas: implementing advanced threat detection for AI-generated attacks, ensuring ethical AI use aligned with governance frameworks, and partnering with AI experts to stay ahead of emerging threats. This comprehensive strategy allows BDO to effectively guide clients through AI-driven digital transformation, ensuring safety and innovation both in Singapore and globally.


One of the major barriers to AI commercialisation is the inability to explain how models work, and testing AI models against data metrics is one way to facilitate that understanding. Since no AI testing standards exist, the only way forward is to bring together regulators, technology providers, commercial institutions, and academia to address this challenge in an open-source manner, and that is exactly what the AI Verify Foundation has set out to do.

As an AI cloud services provider, Bitdeer AI aims to make AI accessible to everyone by building robust infrastructure and fostering a vibrant ecosystem for researchers, developers, and consumers. Our commitment to trust, excellence, and responsible AI is exemplified by our partnership with the AI Verify Foundation. We recognise the critical importance of AI testing and look forward to contributing to the Foundation’s mission to ensure AI is harnessed responsibly for the betterment of humanity.


BGA supports the AI Verify Foundation as a pioneering path forward in bringing together important players to develop trustworthy AI. At BGA, we strive to promote constructive engagements between regulators, our partners, and the overall business community. The Foundation is one such platform that presents an opportunity for companies to shape the way AI technologies, testing, and regulation are co-developed. We hope to work closely with IMDA and our partners through the AI Verify Foundation so that Singapore can reap the full benefits of AI in the years to come.

As a governance tool that helps enterprise organizations document, manage, and monitor their AI models and datasets to ensure compliance with internal and external regulations, BreezeML is a staunch advocate for the responsible and ethical development and use of artificial intelligence. With our values aligning closely with AI Verify Foundation’s mission of building trust through ethical AI, we are proud to join and support the AI Verify Foundation to promote governance and compliance in the greater AI community.


Realizing the benefits of artificial intelligence requires public trust and confidence that these technologies can be developed and deployed responsibly. The Business Software Alliance has for years promoted the responsible development and deployment of AI, including through BSA’s Framework to Build Trust in AI, which was published in 2021 and identifies concrete and actionable steps companies can take to identify and mitigate risks of bias in AI systems. BSA also works with governments worldwide toward establishing common rules to address the potential risks of AI while realizing the technology’s many benefits. The AI Verify Foundation offers an important forum for industry, government, and other stakeholders to work together toward building trustworthy AI.

The AI Verify Foundation provides an essential platform for bringing safe AI development to fruition, connecting networks of all capabilities to ensure trustworthy AI usage for individuals, companies, and communities. At Calvin, we are proud to contribute our expertise to its core mission.
The dialogue on Responsible AI in all its facets is vital. We are proud to contribute to the AI Verify Foundation’s mission and look forward to collaborating with leading innovators in the realm of Trustworthy AI.


Changi Airport Group views AI as a strategic enabler to boost operational efficiency, manpower productivity, and customer experience. While embracing its potential, we proactively manage AI risks through responsible governance. Participating in AIVF’s Global AI Assurance Pilot helped validate our internal structures and processes, affirm our strengths, and identify areas for growth. Our involvement in this foundation reflects a strong commitment to advancing safe, trustworthy, and high-quality AI in collaboration with IMDA and industry partners.

Changi General Hospital is excited to join the AI Verify Foundation as one of the first public healthcare institutions. Consistently ranked amongst the world’s best smart hospitals, CGH leverages smart technologies and artificial intelligence (AI) to develop and adopt patient-centric, safe and cybersecure digital solutions to enhance healthcare processes and improve care delivery outcomes. We look forward to collaborating with other members to support the development and use of AI testing frameworks, code base standards, and best practices, specifically in the important area of healthcare, to innovate and better healthcare for the future.

At CognitiveView, we believe responsible AI starts with transparency and ends with traceable proof. We’re excited to join the AI Verify Foundation in advancing trustworthy AI standards through collaboration and open innovation.

At Collinear, we help enterprises turn model testing into continuous improvement, using simulation-driven evaluations and high-quality environments. Joining the AI Verify Foundation reflects our commitment to practical, repeatable assurance methods that make AI systems safer, more performant, and easier to govern at scale. We look forward to collaborating on standards and tooling that accelerate trustworthy deployment.

Concordia AI aims to ensure that AI is developed and deployed in a way that is safe and aligned with global interests. We believe rigorous third-party testing throughout the AI lifecycle is vital to ensure that we can reap the benefits of AI while safeguarding against potential harms. Concordia AI is pleased to join the AI Verify Foundation to contribute to this global effort.

The Chartered Software Developer Association believes in promoting cross-cultural ethical and industry-leading practices for the AI & ESG revolution. As a global professional association for technology professionals, we are confident that joining the AI Verify Foundation will create synergies that benefit the community’s responsible AI practices.
Amid the scale and pace of today’s AI innovation, we are working towards establishing foundational AI governance testing tools for responsible AI applications that protect the public interest, along with the frameworks, code bases, standards and leading practices for AI.

Citadel AI is proud to be a member of the AI Verify Foundation. Our AI testing and monitoring technology is used by AI auditors and developers globally, and as part of the AI Verify Foundation, we hope to accelerate our shared mission of making the world’s AI systems more reliable.

Responsible and ethical AI is the key to the future. CITYDATA.ai applies AI and machine learning to make our cities smarter, safer, more equitable, and more resilient. In joining the AI Verify Foundation, we hope to be able to contribute to AI governance tools and frameworks in a neutral space for the AI ecosystem to thrive and produce outcomes for the betterment of humankind.


The Council on AI Governance is excited to incorporate into its open resources, on an ongoing basis, the AI Verify Foundation’s pioneering frameworks and testing kit.

Credo AI is thrilled to join the AI Verify Foundation, and we look forward to harnessing the collective power and contributions of the international open-source community to develop AI governance testing tools that can better enable the development and deployment of trustworthy AI.
We strongly believe in the importance of fostering a diverse community of developers who can collectively contribute to the development of AI testing frameworks and best practices, and we look forward to contributing our expertise and thought leadership to this pathfinding community, as we continue to work together to develop and maintain responsible AI tools, frameworks, and standards. This Foundation will nurture a diverse network of advocates for AI testing, which we believe is essential to driving the broad adoption of responsible AI globally.


One of our priorities as a data science platform provider is ensuring our customers safely, responsibly, and effectively leverage and scale AI. In support of this we launched Govern – a dedicated workspace to govern AI and analytics projects – that sits alongside platform features that enable reliability, accountability, fairness, transparency, and explainability.
Tools like AI Verify can be extremely important to organisations investing in AI and analytics governance, and to how we work with them: they serve as a foundation that helps give shape to strong, well-conceived AI governance practices that enable the responsible use of the technology.

AI Verify provides the much-needed gold standard for the responsible use of AI. It provides the yardstick that attests to the trustworthiness of the AI that we build. This is a ray of hope amidst mounting ethical AI concerns!

As organisations worldwide continue to drive increased adoption of AI-based solutions, it is more important than ever to establish the guardrails to ensure this is done responsibly. Singapore’s regulators have, for some time now, been at the forefront in ambitiously moving beyond high-level principles and guidelines towards developing frameworks and toolkits that give organisations greater capability to manage and govern their AI-based solutions.
DBS is proud to have been able to work closely with PDPC and IMDA in developing and testing some of their approaches over the years as a trusted partner; being part of the AI Verify Foundation will enhance this collaboration and help shape the emerging initiatives in this space.

At DefAi, ensuring the safety and reliability of our AI solutions is critical to upholding responsible AI deployment. By joining the AI Verify Foundation, we are committed to advancing trustworthy AI practices, reinforcing our dedication to data security and ethical standards, and empowering enterprises with AI that is both powerful and reliable.

Our collaboration with the AI Verify Foundation exemplifies our belief in the transformative power of collective innovation to advance transparent, ethical, and reliable AI solutions. By joining this pivotal initiative, we can proactively shape the future of trustworthy AI, underscoring our commitment to fostering technologies that respect user privacy, fairness, and transparency. We look forward to setting new industry standards, inspiring trust, and encouraging responsible innovation in the AI ecosystem.

DigiFutures is committed to taking the lead in ethical and responsible innovation to create a better world. Partnering with the AI Verify Foundation supports our mission to empower businesses to harness the full potential of AI, while ensuring that AI is safe, trustworthy, and used responsibly.

At DNV, we believe that trustworthy AI is essential for safeguarding life, property, and the environment. Robust testing is critical to ensure that AI delivers value without compromising safety. As a global risk management and assurance provider, we are committed to supporting development of methods and standards that help organizations build confidence in their AI systems. By joining the AI Verify Foundation, we strengthen our collaboration with international partners to advance testing and assurance practices that make AI safe, secure, and beneficial to society.


As a company focused on operationalizing LLM applications responsibly, we see rigorous testing and evaluation as foundational. Joining the AI Verify Foundation allows us to support and contribute to a trusted AI ecosystem.



At EngageRocket, we believe that joining the AI Verify Foundation enables us to deploy trustworthy and responsible AI in our products. It aligns perfectly with our vision of shaping better workplaces with credible technology.

Envision Digital is delighted to support the launch of IMDA’s AI Verify Foundation. Responsible AI has been our focus, as we recognise the need for responsible practices given the increasing deployment and limitless potential of AI innovation to support our customers. Together with IMDA, now is the time to turn responsible AI into action as we harness the power of AI to create a more sustainable world.

Responsible AI begins with transparency. As the world’s digital infrastructure company, Equinix empowers organizations with globally distributed data center and interconnection services, enabling seamless connectivity, data control, and compliance with data sovereignty requirements. With governance-grade visibility and compliance-grade observability, Equinix supports transparent audit trails aligned with AI Verify and IMDA’s Model AI Governance Framework for Generative AI, making it the compliance backbone of responsible AI infrastructure.

EVYD Technology’s vision is to build a future where everyone can access better health. Our platforms leverage the power of AI to drive better healthcare decisions, and we equally believe that users need an assurance of secure and responsible use of such data. EVYD believes that utilising a platform such as AI Verify not only supports our vision, but also demonstrates our commitment to trustworthy AI that creates better health outcomes across populations and assures the safety and security of the underlying data.


At EY, we recognize the critical role of risk management and governance in ensuring the ethical and responsible design, implementation and use of AI systems. Joining the AI Verify Foundation underscores our commitment to responsible AI and the importance of rigorous testing and validation of AI models, both for those used internally and those that we build for our clients.

AI apps are the most modern of modern applications, and like any modern app, they demand performance, security, and trust. By joining the AI Verify Foundation, F5 is reinforcing our commitment to advancing responsible AI. We bring decades of leadership in application security and delivery to the table, to help shape responsible, transparent AI deployment across industries.

As organisations around the world continue to adopt AI solutions at the current pace and scale, they need to put proper controls and guardrails in place to ensure these solutions are safe and compliant with existing and upcoming regulations. Fairly AI is focused on accelerating responsible AI innovation, and our partnership with the AI Verify Foundation hopefully enables even more organisations to accelerate the safe and responsible adoption of AI.


The rapid adoption of AI technologies in the near future is undeniably going to change the contours of the way we work and engage our customers, employees and stakeholders. As such, focusing on working out the governance, ethical, and legal frameworks of how we use this technology is now more important than ever.
FairPrice Group is committed to partnering and working constructively with relevant stakeholders such as the AI Verify Foundation and IMDA. Our aim is to support the development of Singapore’s AI ecosystem and the resultant implementation of fair and practical frameworks and guidelines to regulate the technology appropriately and proportionately.

Trustworthy AI development starts long before launch and is earned every time advanced AI systems are tested. By joining the AI Verify Foundation, FAR.AI commits to open and standardised testing, leveraging our research into emerging AI risks to validate frontier models and build public trust.

As disseminators of responsible technology, Fidutam recognizes the pivotal role of young people in advocating for and deploying responsible technology. Fidutam’s innovative fin-tech and ed-tech products have been used by over 3,400 individuals in Latin America, Sub-Saharan Africa, and the United States, enabling upward economic and educational mobility. By joining AI Verify, Fidutam aims to amplify the voice of the youth in shaping responsible AI practices globally.

Building a future with AI that is fair, explainable, accountable, and transparent is our collective responsibility. Finbots.AI is delighted to have collaborated with IMDA and PDPC to be one of the pioneering Singapore startups to complete the AI Verify toolkit. We look forward to continuing our partnership through the AI Verify Foundation by innovating on transformative use cases with the AI community and building ethical AI frameworks that are benchmarked to global standards.

At Flint Global, we see AI governance as central to building public trust and supporting innovation that delivers lasting benefits for society. We support leading developers and deployers of AI to engage productively with policymakers on emerging issues around AI governance. Our participation in the AI Verify Foundation reflects our conviction that open dialogue between the public and private sectors is fundamental in developing practical standards and policies that support the adoption of trusted AI.

The use of data and AI within GCash is focused on how we can work towards financial inclusion for Filipinos. Responsible AI is part of our DNA, and we look forward to working together and learning from the AI Verify Foundation’s community as we adopt best practices in AI testing.

As the first investment firm dedicated to promoting and supporting generative AI startups in ASEAN, we have witnessed various innovations in this space. We understand the critical importance of building safe AI products for users, which can serve as a competitive advantage for ASEAN startups seeking growth and scalability globally. Therefore, we strongly encourage startups to prioritize responsible AI from day one. Partnering with government agencies such as the AI Verify Foundation and IMDA is essential to staying informed and ensuring the responsible use of AI’s full potential.

The rapid evolution of the IT and AI landscape has heightened the need for strong ethical conduct in all areas of use, development, and solution deployment. G.E.N.S is committed to upholding internationally and nationally recognised frameworks, ensuring that AI practices remain responsible, transparent, and secure. We strive to maintain an environment where stakeholders can trust that AI technologies are applied with integrity and aligned with the highest standards of governance.

Joining the AI Verify Foundation underscores our dedication to fostering EU-Singapore collaboration. We aim to develop concrete solutions in AI testing and compliance, supporting global efforts toward responsible AI governance.

HackAPrompt has repeatedly run the world’s largest AI Red Teaming competitions with HackAPrompt 1.0 & HackAPrompt 2.0, and has demonstrated that rigorous AI testing is essential for responsible AI development. To date, our research has been used by every frontier AI lab, improving OpenAI’s model security by 46%. Joining AI Verify Foundation aligns with our mission to democratize AI safety testing and advance trustworthy AI deployment across the industry.

Handshakes can only help our clients do business safely when our AI is properly tested. Joining the AI Verify Foundation demonstrates that resolve.


As AI technologies keep evolving, HCLTech believes that implementing ethical and responsible design, development, and deployment is essential to mitigate risks and build consumer trust. By focusing on responsible AI and governance, we can help organizations achieve regulatory compliance and secure a competitive advantage. Collaborating with the AI Verify Foundation will help HCLTech continue to drive our mission of moving AI and governance practices forward in order to unlock the full value of AI as a force for good for our company, clients, and society.


The governance of AI is a key issue for Hitachi, which recognises the significant societal impact associated with the use of this technology across its extensive business domains. We believe that the AI Verify Foundation will help businesses become more transparent to all stakeholders in their use of AI. We look forward to working with the Foundation on co-creating frameworks and ecosystems that drive the broad adoption of AI governance.

Holistic AI is on a mission to empower organizations to adopt and scale AI with confidence. Our comprehensive AI Governance platform serves as the single source of truth on AI usage by discovering and controlling AI inventory, assessing and mitigating risk of AI systems, and ensuring compliance with the latest legislation. We are proud to be a member of the AI Verify Foundation, and strongly align with their mission to develop best practices and standards that help enable the development and deployment of trustworthy AI.

The scale and pace of AI innovation in this era requires, at its very core, that foundational AI governance frameworks become mainstream, ensuring appropriate guardrails are in place when responsible AI algorithmic systems are implemented in applications. The AI Verify Foundation serves this core mission and, as we progress as an advancing tech society, underscores the need to advocate for the deployment of more trustworthy AI capabilities.

At H2O.ai, our mission is fundamentally focused on deploying AI responsibly. We are dedicated to ensuring that AI systems comply with applicable regulations and operate with transparency and ethical integrity. By joining the AI Verify Foundation, H2O.ai can collaborate with AIVF to contribute to the creation of AI governance toolkits. This partnership underscores our commitment to responsible AI practices.

Our partnership with the AI Verify Foundation reflects our commitment to human-centered innovation. By participating in this vital initiative, we aim to shape the future of AI that places human values—such as privacy, data fairness, and transparency—at its core. We are eager to contribute to developing tools and frameworks that ensure AI technologies are not only reliable but also respect and enhance human dignity. As a European non-profit organisation, we hope to add our perspective to the creation of global industry benchmarks for responsible AI and promote greater trust in technology by collaborating with partners from all regions of the world.


At Imagenz, we see trust not as a label but as a living system sustained through transparent governance, rigorous testing, and meaningful human oversight. Joining the AI Verify Foundation reflects our commitment to making responsible AI tangible and measurable. Through our work integrating cybersecurity, data protection, and AI governance into practical assurance frameworks, we aim to contribute to the community’s shared pursuit of enduring trust in AI — one that strengthens both innovation and public confidence.


At Ingenium Biometrics Laboratories, we are a leading testing, R&D, and innovation laboratory dedicated to helping organisations build trust in identity, biometric, age estimation, KYC, AI, and deepfake prevention technologies. We specialise in assuring AI-driven systems for public and private sector organisations in the UK and globally. As these technologies become increasingly essential to identity verification, security, and authentication, ensuring their accuracy, fairness, and reliability is critical. Joining the AI Verify Foundation aligns with our mission to promote transparency, accountability, and best practices in AI-driven solutions.

Ensuring that AI systems are safe, reliable, and compliant is at the heart of Intelligible’s mission. Partnering with the AI Verify Foundation allows us to both learn from and contribute to a community dedicated to robust AI governance and testing. Together, we aim to drive innovation, establish best practices, and set new benchmarks in AI safety and compliance, ensuring the highest levels of trust and reliability in AI systems for a better tomorrow.

We recognise the critical importance of trustworthy AI in improving patient and customer outcomes. Joining the AI Verify Foundation aligns with our mission to deliver safe and reliable virtual training solutions, and we believe in the power of open collaboration to advance responsible AI practices. We strongly support the mission of the AI Verify Foundation to foster a community dedicated to AI. By ensuring the trustworthy deployment of AI, we can drive innovation, build stakeholder trust, and create a more sustainable future for all.

AI testing enables Invigilo to understand system behaviour and potential edge cases, allowing the team to intervene where an AI system is not performing before deploying it in real-world conditions. It also enables better communication between AI developers and end users on how AI systems arrive at their decisions, and provides explanations when necessary.

As AI systems progress toward general-purpose, human-centric intelligence, rigorous, transparent, and internationally standardized testing becomes essential to safeguard societal values and foster trustworthy innovation. By joining the AI Verify Foundation, the Institute for Artificial Intelligence Research and Development of Serbia is committed to contributing its research expertise to the development of CERN-like open benchmarking platforms—including immersive VR testbeds and AI Observatory of sentiments towards socially important issues. These platforms will empower the global community to measure, compare, and continually enhance the safety and alignment of advanced AI.

As a General member of AI Verify, JJ Innovation Enterprise Pte Ltd can further align with best practices in AI governance, collaborate with industry peers, enhance our solutions’ credibility, and ensure our AI solutions are developed and deployed responsibly, thus contributing to the broader goal of advancing trustworthy AI as a trusted solution provider.

For KBTG, responsible AI is more than a principle – it’s the bedrock of our “human-first x AI-first” philosophy. AI safety and testing give us a way to prove our models are fair, secure, and reliable in the real world. Joining the AI Verify Foundation allows us to work alongside international experts to develop shared standards, test frameworks, and open tools. This global collaboration not only enhances our own systems, but also supports Thailand’s broader goal of building AI that people can trust.


Joining independent organizations like AI Verify allows us to collaborate and share real-life practical experiences and knowledge with industry leaders, thereby advancing responsible AI practices. It also provides us with a platform to advocate for ethical AI principles to raise awareness among companies and the public.

At Knovel, we believe that AI testing is instrumental in ensuring responsible AI by identifying biases, ensuring fairness, and maintaining transparency. By joining the AI Verify Foundation, we endeavour to contribute to advancing trustworthy AI, sharing best practices, and influencing global standards. This strengthens our commitment to ethical AI development, builds user trust, and helps ensure that our AI systems are safe, transparent, and aligned with evolving regulatory requirements.

KPMG sets standards and benchmarks for AI and digital trust. By collaborating with the AI Verify Foundation, regulators, and industry leaders, we can build a trustworthy AI ecosystem by developing rigorous governance frameworks. This effort promotes trusted AI adoption among Singapore businesses, positioning Singapore as a global AI hub for scalable AI solutions that transform industries with integrity.

As artificial intelligence (AI) continues to evolve, adopting responsible and secure AI practices is no longer optional—it’s essential. At Kyndryl, we help organizations strengthen AI governance through a structured framework that prioritizes safety, reliability, and accountability—empowering innovation with AI integrity for a smarter, safer future.

At LatticeFlow AI, we believe robust AI testing is key to building trust and accelerating AI adoption. Companies need strong AI governance frameworks to ensure their models are reliable, high-performing, and aligned with business goals. Joining the AI Verify Foundation reinforces our commitment to advancing trustworthy AI by driving industry-wide standards and best practices. Together, we can help organizations scale AI with confidence, unleash innovation, and drive sustainable business growth.


We focus on the development of AI for compliance use in the financial industry, so we value governance highly and see responsible and trustworthy AI as essential to product development. We would like to join the AI Verify Foundation because it is a community that values responsible and trustworthy AI, where we can exchange ideas with like-minded peers and co-create best practices for AI testing in the market.

As a consultancy firm committed to advancing responsible AI, we are proud to be a member and honoured to support AI Verify in their mission to build trust through ethical AI. Their dedication to fostering transparency, fairness, and accountability in AI systems sets a powerful standard for the entire industry, and we are excited to collaborate in shaping a future where AI is not only innovative but also ethically responsible.

We advocate for a world where people and wildlife thrive together. This purpose drives us to actively contribute to the conservation of species, habitats, wildlife science and research. Augmenting this at our destination of the Mandai Wildlife Reserve, we nurture people’s connection with the natural world, by harnessing innovative technology to educate and engage. Embedding ethics in our adoption of AI is therefore key to ensuring our technologies respect and enhance the physical environment, while benefitting the animals in our care, our employees, and visitors.


There is immense potential within the media and broadcasting industry to leverage AI. Mediacorp is exploring the use of AI in areas such as content generation, marketing, and advertising and is honoured to be among the pioneer members of the AI Verify Foundation. We look forward to working with the community of AI practitioners to exchange knowledge, collaborate on initiatives, and drive the development of robust AI governance standards in Singapore.


Our focus is on ensuring that AI at Meta benefits people and society, and we proactively promote responsible design and operation of AI systems by engaging with a wide range of stakeholders, including subject matter experts, policymakers, and people with lived experiences of our products. To that end, we look forward to participating in the AI Verify Foundation and contributing to this important dialogue in Singapore and across the entire Asia Pacific region.


MLSecured is a platform dedicated to AI Governance, Risk, and Compliance, designed to assist companies and public sector organizations in responsibly adopting AI, managing AI risks, implementing best governance practices, and adhering to AI regulations.

Given the rapid evolution of artificial intelligence (AI), we at MSD believe that pursuing the value that comes from AI must be done alongside a strong commitment to managing AI risk. We keep our patients and employees at the center of everything we do, and are actively investing in cross-functional efforts to develop industry-leading AI practices in safety and security. Our participation with AI Verify is part of our commitment to technical excellence in AI safety.


AI testing demonstrates NCS’ commitment to delivering responsible, safe, and equitable AI solutions. We harness technology to provide right-sized cybersecurity solutions that future-proof cyber resiliency and shape the future of AI. Our clients trust us to safeguard their digital transformation journeys, leveraging our expertise and end-to-end capabilities to enhance their security posture, streamline processes, and strengthen governance. Joining the AI Verify Foundation underscores our dedication to ethical AI governance and building a secure and resilient digital future.

As a company developing Python-based rPPG software, AI testing is crucial to demonstrate responsible AI practices, ensuring the accuracy, fairness, and ethical considerations of our algorithms. Joining the AI Verify Foundation is vital as it allows us to contribute to advancing the deployment of responsible and trustworthy AI, aligning our commitment to ethical development with a community dedicated to fostering AI transparency and accountability.

As a pioneer in the AI field, OCBC Bank is committed to ensuring that the future of AI is fair to all. The AI Verify Foundation is a key enabler in achieving the goal of trustworthy AI.

At OpenAI, we believe that AI has huge potential to improve people’s lives – but only if it is safe and its benefits are broadly shared.
That’s why we’re proud to support AI Verify and the Singapore government’s efforts to promote best practices and standards for safe, beneficial AI.
We look forward to working with the Foundation towards our shared goal of the development and deployment of AI that benefits all of humanity.

Joining the AI Verify Foundation is a valuable opportunity for our company to contribute to the development of trustworthy AI and collaborate with a diverse network of advocates in the industry. We fully support the mission of the AI Verify Foundation to foster open collaboration, establish standards and best practices, and drive broad adoption of AI testing for responsible and trustworthy AI.

AI safety and testing are central to PALO IT’s mission of creating ethical and trustworthy AI systems. By adhering to FEAT principles and joining the AI Verify Foundation, we ensure our solutions meet stringent governance standards, minimize risks, and align with societal expectations. This dedication allows us to drive responsible innovation, support sustainable digital transformation, and deliver lasting value to our clients and communities across diverse industries.

With the emergence of AI, Parasoft is proud to be a member of the AI Verify Foundation.
It is important that the AI environment we create is safe, robust, responsible, and ethically adopted by all in today’s digital world.
We applaud the Singapore Government’s efforts in taking on the heavy lifting of collaboration, trust-building, and governance in the AI community.
By leveraging rich integrations with the AI Verify toolkit, our customers can now benefit from this partnership and get the most comprehensive, value-driven approach to testing.
We believe there is a need to ensure AI service development is in line with ethical principles and societal values, ultimately bringing positive impact to our communities.

At Patsnap, we recognise that ensuring AI safety and rigorous testing are not just technical requirements but a fundamental responsibility – with more than 12,000 global companies across diverse industries trusting us to innovate better and faster. Joining the AI Verify Foundation demonstrates our commitment to advancing the deployment of responsible AI, fostering innovation while prioritising the ethical and safe application of AI technologies. This collaboration also underscores our dedication to leading the development of AI applications for enterprises with integrity and transparency.

At Predictive Systems, we are dedicated to advancing artificial intelligence technology with a steadfast commitment to ethics and responsibility. Our mission is to bring AI to the enterprises while ensuring it remains safe and beneficial for everyone. We see joining the AI Verify Foundation as a crucial step in advancing the deployment of trustworthy AI solutions.

At Prudential, we are constantly looking at ways of using data and AI to deliver an exceptional customer experience – while building an insurance landscape that is inclusive and equitable. We apply our responsible AI principles to safeguard our customers’ health and financial well-being.
In partnership with the AI Verify Foundation, we’re crafting AI ethics toolkits that align with these core principles. Our customers can trust in our commitment to building robust and secure systems, which are rigorously tested for transparency and accountability.

At QuantPi, we believe that AI governance does not work without AI testing. Reliable and scalable AI testing is business critical for advancing AI with confidence and fostering trust through demonstrated quality. QuantPi provides a testing tool which rigorously assesses AI models for bias, robustness, compliance and more, ensuring transparency and verifying their quality. We are excited to be a part of the AI Verify Foundation, as we believe collaborative efforts fuel innovation. Our membership can ensure our solutions consistently evolve and meet the ever-growing demands of a responsible AI landscape.

AI safety and testing are crucial to us and our clients. We’re committed to rigorous bias and safety testing to prevent our LLM from suggesting or containing malicious content. By refining our processes, we aim to stay ahead of risks and deliver reliable results. Joining the AI Verify Foundation allows us to contribute to Project Moonshot, which aligns with our focus on responsible AI. Through this collaboration, we help companies navigate the opportunities and risks of generative AI, making their systems innovative and secure.

With the rapid advancement of AI in today’s interconnected world, new cybersecurity threats and safety concerns have emerged. For safe and reliable deployment and usage of AI, cybersecurity is crucial to ensure the protection of sensitive data and prevention of malicious attacks that could compromise AI systems and user safety. Joining the AI Verify Foundation with other industry leaders enables us to share and collaborate on best practices and standards in maintaining trust and reliability in AI technologies, which are essential for their effective development and adoption.

AI safety and testing are paramount for organisations, ensuring that the use of AI systems is safe, reliable, and ethical. They also, to a large extent, mitigate the legal and technical risks associated with the development and deployment of AI systems, which facilitates the building of trust with our customers and stakeholders. Joining the AI Verify Foundation is a significant step for us to contribute to advancing the deployment of responsible and trustworthy AI. It allows us to collaborate with industry leaders, share best practices, and contribute to the development of standards that will shape the future of AI.

At RCBC, we believe in empowering our people with the use of data and AI to create better customer experience. Crucial to this is our commitment to practice human-centric and responsible AI deployments to build and sustain customers’ trust.

RealAI is committed to developing safer and more reliable AI systems and applications, with a mission to ensure trustworthy AI serves humanity. By joining the AI Verify Foundation, we aim to share our expertise in AI safety and testing, collaboratively shaping global standards and advancing the development of ethical and reliable AI systems.


RegTank looks forward to contributing towards the evolving AI standards and testing methodologies through our participation as a member of the AI Verify Foundation to forge greater trust with clients, regulators, and other stakeholders.

At Resillion, we understand that AI safety and testing are paramount for demonstrating responsible AI. We not only uphold the highest standards of quality but also build trust with our clients and the broader community. Joining the AI Verify Foundation is a significant step for us, as it aligns with our mission to advance the deployment of trustworthy AI for our clients. This collaboration allows us to contribute to industry best practices and further solidify our position as a leader in quality engineering and managed testing services.

As AI’s impacts become increasingly widespread, the responsible AI community must have access to clear guidance on context-relevant AI testing methodologies, metrics, and intervals. The Responsible AI Institute is excited to support the AI Verify Foundation, given its proven leadership in AI testing, dedication to making its work accessible, and commitment to international collaboration.

As technologists and practitioners of AI, Responsible AI is a core principle at retrain.ai. From shaping NYC’s Law 144 and conducting extensive research on AI risks, to launching the first-ever Responsible HR Forum and embedding explainability, fairness algorithms, and continuous testing to ensure our AI models meet the highest standards of responsible methodology and regulatory compliance, we view Responsible AI as one of our main pillars. Joining the AI Verify Foundation is an extension of our dedication to responsible AI development, deployment, and practices in HR processes.

At Reversec, we believe that trust is the foundation of both cybersecurity and a robust AI ecosystem. Our commitment to AI safety and testing reflects our dedication to assurance, accountability, and transparency in digital innovation. By joining the AI Verify Foundation, we aim to contribute our expertise in Generative AI Security, LLM Security, AI Infrastructure Security, and AI Penetration Testing to advance industry standards for responsible, transparent, and secure AI adoption.

Our commitment to AI security and governance stems from the belief in AI’s potential for positive impact. We aim to contribute to a future where AI benefits humanity with minimized risks. Our objective is to empower organizations to achieve their goals through trustworthy and safe AI systems. Joining the AI Verify Foundation allows us to rigorously test our AI Governance framework, promoting the safe adoption of AI.


As more and more solutions and decisions are developed with the help of AI, there is a greater need to adopt responsible AI, and there is a greater responsibility on our shoulders to help customers to do that effectively and efficiently.

Scantist believes robust AI testing is crucial for responsible AI implementation – especially in cybersecurity. Joining the AI Verify Foundation amplifies our commitment to shaping a secure future where secure cyber-systems – including AI – are the standard, not the exception.

AI testing is essential to our commitment to responsible AI in banking and finance, where trust, fairness, and compliance are critical. Rigorous testing ensures our AI solutions operate accurately, securely, and without bias, especially in areas like credit scoring, fraud detection, and risk management. By validating models against regulatory and ethical standards, we protect customer interests and uphold transparency. This strengthens stakeholder confidence and demonstrates our dedication to delivering trustworthy, compliant, and effective AI for the financial services sector.

Facticity.AI, a Singaporean-American LLM app, is dedicated to improving AI safety by contributing a localized, multilingual dataset for factuality—an initiative valuable to Singapore and the region. By joining the AI Verify Foundation, we aim to promote trustworthy AI through transparency and accountability. Facticity.AI prioritizes explainability from credible sources and supports a more equitable, accountable, and transparent AI ecosystem for all stakeholders.

Sekuro is committed to offering assurance services to AI companies with a focus on boosting their credibility, managing risks, and supporting their decision-making.
As a seasoned consultancy firm with expertise in NIST CSF, ISO 27001, and ISO 42001, we value the chance to contribute to our partners’ Integrated Management Systems (IMS).
Our goal is to help ensure the ethical, responsible, and trustworthy development and deployment of AI as well as ensuring confidentiality, integrity, and availability of the company’s information.

The AI Verify Foundation will advance the nation’s commitment to fostering trustworthy AI as a cornerstone of Singapore’s AI ecosystem. At SenseTime International, we look forward to co-creating with the Foundation a future where AI technologies are developed and deployed responsibly, aligned with international best practices, and recognised for their positive whole-of-society impact.

As a global leader in the TIC industry, SGS believes that joining AIVF as a member demonstrates our commitment to providing a safe, reliable, and trustworthy environment as enterprises continue their advancement in the design and deployment of AI. Collaboration with AIVF enables SGS to contribute to expertise exchanges, stay at the forefront of industry developments, and help drive trust and confidence among end users.

As a leading communications technology company, Singtel’s committed to empowering people and businesses and creating a more sustainable future for all. We see AI as a key enabler in the development of new innovations that will transform industries and consumer experiences. Through our collaboration with the AI Verify Foundation, we’re helping to advance the transparent, ethical, and trustworthy deployment of AI so everyone can enjoy the next generation of technologies safely.

At Silent Eight, AI safety and rigorous testing are central to our Responsible AI framework built on Fairness, Robustness, Transparency, and Explainability. In fighting financial crime, every model decision matters, and by joining the AI Verify Foundation, we reinforce our commitment to advancing trustworthy AI solutions that have been rigorously tested using cutting edge toolkits, uphold integrity, and protect our clients and the financial ecosystem at large.

AI testing is paramount to SoftServe because it embodies our commitment to delivering responsible AI solutions. In an era where AI is evolving, we recognize the need to ensure our technologies are transparent, accountable, and beneficial for all stakeholders. By rigorously testing our AI solutions, we guarantee their functionality and ensure they align with ethical standards and values we uphold.
Joining the AI Verify Foundation is a strategic decision. Being part of the Foundation positions us at the forefront of global AI standards and best practices. It is also a great way to further communicate our commitment to responsible AI and to be part of a community that contributes to regional initiatives in this space.


SPH Media’s mission is to be the trusted source of news on Singapore and Asia, to represent the communities that make up Singapore, and to connect them to the world. We recognize the importance of AI and are committed to responsible AI practices. We strive to build AI systems that are human-centric, fair, and free from unintended discrimination. This process will be enhanced by AI testing, which allows us to identify and address potential risks associated with AI and aids us in our mission.

The mission of the AI Verify Foundation resonates with Squirro’s belief in the responsible and transparent development and deployment of AI. We look forward to participating in this vibrant global community of AI professionals to collectively address the challenges and risks associated with AI.

The capabilities of AI-driven systems are increasing rapidly, as we have seen with large language models and generative AI. The democratisation of access will lead to the widespread deployment of AI capabilities at scale. Evaluating AI systems for alignment with our internal Responsible AI Standards is a key step in managing emerging risks, and testing is a critical component in the evaluation process.
The pace and scale of change concerning AI systems require risk management and governance to evolve accordingly so users can derive the benefits in a safe manner. This cannot be done independently, and it is better to collaborate with the wider industry and government agencies to advance the deployment of responsible AI. Standard Chartered has partnered with IMDA to launch the AI Verify framework, and joining the AI Verify Foundation is a logical next step to ensure we can collaboratively innovate and manage risks effectively.

At Staple, trustworthy AI isn’t a feature; it’s the foundation. Joining the AI Verify Foundation enables us to contribute to the development and deployment of responsible AI standards, and to advance a transparent, auditable approach to enterprise AI solutions.

We are heartened that IMDA is leading the way in ensuring AI systems adhere to ethical and principled standards. As a member, ST Engineering will do its part to advance AI solutions and to shape the future of AI in a positive and beneficial way.


Strides Digital is excited to join the AI Verify Foundation community to use and develop AI responsibly, as we help companies capture value on their decarbonisation and fleet electrification journey.

At TaskHived, we believe trustworthy AI requires more than technical accuracy. It demands transparent, human-verified reasoning that is fair, explainable, and sensitive to multilingual and cultural context. Joining the AI Verify Foundation aligns with our mission to advance rigorous, interoperable assurance practices that support both governments and enterprises. We look forward to contributing our schema-based, human-in-the-loop evaluation layer to strengthen accountability, mitigate risks, and enable the responsible deployment of AI systems globally.

AI is seen as a transformative technology that offers opportunities for innovation to improve efficiency and productivity. As the need for AI-powered solutions continues to surge, the active engagement of the community in the development of best practices and standards will be pivotal in shaping the future of responsible AI. Tau Express wholeheartedly supports this initiative by IMDA, and we look forward to leveraging the available toolkits to continue building trust and user confidence in our technology solutions.

AI testing forms the bedrock of TeamSolve’s commitment to responsible AI development. It serves as our unwavering assurance to the operational workforce that they can place their complete trust in our AI Co-pilot, Lily, knowing that it relies on trustworthy knowledge sources and provides recommendations firmly rooted in their domain.
The AI Verify Foundation and its members collectively play a pivotal role in advancing AI towards higher standards of accountability and trustworthiness, and towards greater acceptance in society.


At Techvify, we aspire to join hands with AI Verify to strengthen SMEs’ confidence in AI, and to connect with like-minded partners in Singapore and across the globe to drive responsible AI initiatives together.

At Telenor, we are committed to using AI technologies in a way that is lawful, ethical, trustworthy, and beneficial for our customers, our employees and society in general. Telenor has defined a set of guiding principles to support the responsible development and use of AI in a consistent way across our companies, to ensure it is aligned with our Responsible Business goals.

At Temasek Polytechnic, AI testing isn’t solely about functionality; it’s about demonstrating our commitment to responsible AI. We understand the imperative of ensuring that our AI systems operate ethically and reliably. Joining the AI Verify Foundation underscores our dedication to advancing the deployment of trustworthy AI. It’s not merely about progress; it’s about ensuring that progress is rooted in principles of responsibility and trustworthiness through education and implementation.

AI testing is pivotal for deploying responsible AI, ensuring safety and risk management. Access to valuable resources and regulatory alignment supports transparency and continuous improvement, which in turn ensure reliability and scalability—all essential for building trust with our clients. Joining the AI Verify Foundation is important for Temus as we collaborate with enterprises on their digital transformation journeys. We aim to foster collaboration and mutual accountability, setting high standards of integrity in this frontier of innovation, so that we all might unlock social and economic value sustainably.

As Tictag is focused on producing the highest-quality data for AI and machine learning, the AI Verify Foundation aligns perfectly with our mission of making AI trustworthy not just in purpose but in substance. Human-centric AI ethics is at the core of what we do, and the reputation of AI Verify will be important to rely on as we expand overseas.

We see value in networking and exchanging ideas with industry leaders. As advanced AI is no longer a distant prospect, industry leaders are increasingly discussing guardrails, safety measures, and what comes next. Singapore is at the forefront of AI development, and Singaporean companies should join this conversation as well, so this is a very timely initiative.

Our project stands for governance and transparency – hallmarks of AI Verify’s framework that we are proud to adopt ourselves and promote. We encourage testing as a means to achieving the overall mission of the Foundation.
The Foundation’s movement to advance responsible and trustworthy AI is the rising tide that will lift all boats. We are inspired by its work and we want to be part of the movement to foster trust whilst advancing AI. We commit to responsible practices of development and deployment.

AI Verify is an important step towards enhancing trustworthiness and transparency in AI systems as we move up the learning curve. In order for AI to live up to its full potential, we need to build and earn this trust. We believe that developing specialised skilled talent and capabilities is the cornerstone of creating AI trust and governance guardrails and toolkits. Making the technology safer is key, and we are glad to support IMDA, who have taken the lead in nurturing future champions of responsible AI.

AI safety and rigorous testing are key to our commitment to responsible, safe, and high-quality work, ensuring that AI and its uses align with our values and deliver trustworthy outcomes. We see joining the AI Verify Foundation as a positive step towards advancing industry standards for responsible AI deployment and towards fostering collaboration and transparency across the industry.

Trusted AI’s mission is to help organisations instill trust in the very DNA of their AI programs, as seen in our logo. We are excited at the opportunity to partner with the AI Verify Foundation as we are aligned with its mission, and together we can advance the development and deployment of trustworthy AI globally.

The TÜV AI.Lab aims to translate regulatory requirements for AI into practice by developing quantifiable conformity criteria and suitable test methods for AI. We are committed to shaping the future of trustworthy AI and contributing to the global development of safe AI engineering practices. By joining the AI Verify Foundation, we are excited to contribute our expertise in AI certification and testing, fostering international collaboration towards safe and trustworthy AI applications.

TÜV SÜD has been progressively working towards extending our many years of Testing, Inspection, and Certification expertise towards AI products and systems. AI testing has therefore become a pivotal piece of our overall organisational objective, and we are glad to work with key industrial partners like the AI Verify Foundation to help develop and promote responsible and trustworthy AI.

As a global leader in communications technology, Twilio is dedicated to empowering businesses to connect with their customers while fostering innovation through AI. By collaborating with the AI Verify Foundation, we support the transparent and trustworthy deployment of AI, enabling transformative technologies that enhance industries and consumer experiences while ensuring a safer, more connected future for all.

At U3 Infotech, we firmly believe that the future of artificial intelligence rests on responsibility and trustworthiness. Our commitment to these principles drives us to develop ethical AI solutions, and partnering with the AI Verify Foundation provides an invaluable platform for global collaboration and learning in promoting responsible AI practices. Dedicated to safety, transparency, and fairness, we are enthusiastic about reinforcing these values through our active engagement with the Foundation.


UCARE.AI has supported IMDA since participating in the first publication of their AI Governance Framework in 2019 and has continued to align our processes when deploying AI solutions for our customers. We believe that the establishment of the Foundation will foster collaboration, transparency, and accessibility, which is crucial in promoting trustworthy AI.

AI safety and testing are vital for demonstrating responsible AI by ensuring ethical use, building trust, and complying with regulations. Rigorous testing mitigates risks, enhances user experience, and ensures systems perform reliably and fairly. It supports long-term sustainability and provides a competitive edge by differentiating us in the market. Prioritizing AI safety and testing aligns with our commitment to ethical standards, fostering trust and ensuring our AI solutions benefit society responsibly.
Joining the AI Verify Foundation is important to us because it advances the deployment of responsible and trustworthy AI. It allows us to collaborate on setting industry standards, share best practices, and contribute to the development of tools for AI transparency and accountability. Being part of this foundation reinforces our commitment to ethical AI, fosters innovation, and helps build public trust by ensuring our AI systems are safe, fair, and reliable.

Ethical use of data is an integral part of our operating DNA, and UOB has been recognised as a champion of AI ethics and governance. By joining the AI Verify Foundation, UOB hopes to contribute to thought leadership in responsible AI.

Vectice is excited to join the AI Verify Foundation. This collaboration aligns with our mission to accelerate enterprise AI adoption and value creation with less risk, and marks a significant step in our commitment to promoting safe, responsible, and ethical AI development. At Vectice, we bring a wealth of expertise in data science management, AI system design, and model documentation, all of which are critical for establishing robust standards and governance practices in AI.



We hope to live in a world in which humans collaborate routinely with intelligent agents to achieve goals that advance our civilization. Collaboration requires trust. Vijil is on a mission to build and operate AI agents that humans can trust. To trust AI agents, we must first test the AI agents. We are thrilled to join the AI Verify Foundation to make it easier for organizations to test the trustworthiness of AI agents comprehensively, with rigor, scale, and speed.



At Voicesense, we believe responsible AI is about transparency – simply put, allowing all to know how and why AI is integrated into their processes and their communities.
Our commitment to fairness – helping to mitigate and eliminate biases across languages and cultures – guides everything we do. Joining the AI Verify Foundation supports this mission as we work with other like-minded groups in order to constantly challenge ourselves, and our peers, to ensure usage of AI technology adheres to these strong ethical standards.

Ensuring AI safety and rigorous testing is paramount to Vulcan’s commitment to helping enterprises deploy responsible AI technology. Joining the AI Verify Foundation aligns with our mission to advance the development of trustworthy AI, enabling innovation while safeguarding ethical standards and public trust. We are proud to contribute to the Foundation in its work to advance responsible AI adoption and innovation.

Walled AI’s mission is to make AI controllable and predictable through research-backed governance tools, emphasizing safety and cultural alignment. In collaboration with the AI Verify Foundation, we aim to establish safety benchmarks and responsible AI pipelines for the safe adoption of AI in Singapore. This partnership will allow us to share our expertise in AI safety evaluations and contribute through governance talks, safety tools, and data collection methods to identify potential harms and biases in AI systems.



As a trusted digital innovation partner to public sector and enterprise clients, Webpuppies is at the frontline of developing and implementing AI solutions responsibly. Our clients depend on us for secure, transparent, and reliable technologies, making AI safety and testing integral to our work. By joining the AI Verify Foundation, we aim to align with global best practices and strengthen the sustainable, trustworthy deployment of AI across the ecosystems we serve.

We see significant value in membership. It allows us to contribute to developing standards for AI governance, shape best practices, and signal our commitment to trustworthy AI. The open-source approach enables continuous progress through collaboration.

Workday welcomes the establishment of the AI Verify Foundation, which will serve as a community for like-minded stakeholders to contribute to the continued development of responsible and trustworthy AI and Machine Learning. We believe that for AI and ML to fully deliver on the possibilities they offer, we need more conversations around the tools and mechanisms that can support the development of responsible AI. Workday is excited to be a member of the Foundation, and we look forward to contributing to the Foundation’s work and initiatives.

At WPH Digital, we recognize that building trust in AI systems is crucial for their widespread and responsible deployment. Joining the AI Verify Foundation places us at the forefront of advancing AI governance through the standardized implementation of a recognized framework and testing tools. This partnership reinforces our commitment to ethical AI practices, ensuring that our AI-driven solutions not only meet but exceed industry expectations for integrity, transparency, and societal benefit.

Artificial Intelligence is an era-defining technological advancement that will reshape our world in ways we cannot yet begin to comprehend. Wielding the awesome power of this technology requires moderation, forethought, discipline and deep deliberation. As an ethical AI company, it is vital that Wubble exercises due restraint and consideration in all aspects of our business. Aligning with the AI Verify Foundation furthers those objectives, whilst helping to contribute to Singapore’s vibrant and dynamic AI ecosystem.

Joining the AI Verify Foundation aligns with X0PA’s commitment to responsible AI practices, as we look to harness the power of AI to promote unbiased and equitable practices in hiring and selection.

At YAGHMA, we believe responsible AI must build on safety, reliability, and accountability. Across industries, organizations face growing demands from regulators and customers to demonstrate that their AI is trustworthy. Our platform, SurikaS, makes regulatory and ethical requirements accessible and actionable using existing information about the AI to tailor compliance assessments.
Joining the AI Verify Foundation enables us to advance AI testing standards and share our expertise in accessible and scalable compliance—enabling organizations worldwide to deploy innovative AI that is also responsible and trustworthy.

As a pioneering analytics consultancy firm in the Philippines since 2013, advising organisations on data and AI strategies, we have a responsibility to continuously seek out best practices and standards, and to contribute to improving the communities of practice. AI safety is a critical piece that we have begun to incorporate into our methodology to ensure trustworthy AI for our clients. Joining the AI Verify Foundation equips us with the tools and provides a venue to contribute to the wider community.

In 2021, Zoloz proposed a trustworthy AI architecture encompassing explainability, fairness, robustness, and privacy protection. Trustworthy AI is the core capability for resisting risks in the digital age. By joining the AI Verify Foundation, we hope to continuously refine our AI capabilities and build an open, responsible, and trustworthy AI technology ecosystem that empowers the digital economy and the wider industry. Through continued practice, we aim to advance the adoption of AI and related technologies across industries and create more value for society.

Many existing Zoom products that customers know and love already incorporate AI. As we continue to invest in AI, Zoom remains committed to ethical and responsible AI development; our AI approach puts user security and trust at the center of what we do. We will continue to build products that help ensure equity, privacy, and reliability.

At Zühlke, we work with highly regulated clients to implement data and AI solutions, turning data-driven insights into decisions and valuable actions. We approach this problem space by focusing on the core components of our responsible AI framework: human-centred, ethical, interpretable and sustainable AI.
In line with AI Verify’s vision of harnessing collective effort to build trust through ethical AI, we contribute to and collaborate with organisations to adopt AI safely, backed by our industry experience in highly regulated sectors.

The Foundation shows the leading stance IMDA is taking to ensure that AI governance becomes core to all organisations and society, not limiting its availability but ensuring that all actors using AI can benefit from AI governance at this pivotal moment in AI’s progression. 2021.AI will endeavour to be a core member, contributing its AI Governance offering and expertise.
Your organisation’s background – Could you briefly share your organisation’s background (e.g. sector, goods/services offered, customers), AI solution(s) that has/have been developed/used/deployed in your organisation, and what it is used for (e.g. product recommendation, improving operation efficiency)?
Your AI Verify use case – Could you share the AI model and use case that was tested with AI Verify? Which version of AI Verify did you use?
Your experience with AI Verify – Could you share your journey in using AI Verify? For example, preparation work for the testing, any challenges faced, and how were they overcome? How did you find the testing process? Did it take long to complete the testing?
Your key learnings and insights – Could you share 2 to 3 key learnings and insights from the testing process? What actions have you taken after using AI Verify?