RT AI TOOLKIT

Singapore Continues Efforts to Align International AI Governance Frameworks

The rapid rise of artificial intelligence (“AI”) has meant that more countries are establishing their own AI governance frameworks to encourage development while managing the associated risks. However, given the cross-border nature of AI applications, it is important to harmonise these disparate frameworks. In this regard, Singapore is continuing its efforts to align international AI governance frameworks.


At the ATxSummit 2025, held in Singapore from 28 to 29 May 2025, Singapore unveiled key insights from its Global AI Assurance Pilot (“Assurance Pilot”). These insights provided the blueprint for the world’s first Testing Starter Kit for GenAI applications (“Starter Kit”), which is now open for industry views.


  1. Assurance Pilot: To encourage the safe adoption of AI across industries, the Assurance Pilot was launched in February 2025 to catalyse emerging norms and best practices around the technical testing of Generative AI (“GenAI”) applications.
  2. Starter Kit: The Infocomm Media Development Authority (“IMDA”) also announced plans to develop a first-of-its-kind Testing Starter Kit for GenAI applications, which generalises key insights from the Assurance Pilot and from consultations with other practitioners to provide practical testing guidance for all businesses developing or leveraging GenAI applications. IMDA is calling for industry views on the Starter Kit, covering both the testing guidance and the recommended tests for the identified risks.


The Singapore Consensus on Global AI Safety Research Priorities (“Singapore Consensus”) was published on 8 May 2025, documenting substantial consensus around key technical AI safety research domains. It is the result of the “Singapore Conference on AI: International Scientific Exchange on AI Safety”, which brought together luminaries in the field of AI safety research from 11 countries. The Singapore Consensus identifies the following three broad areas of AI safety research priorities:


  1. Risk Assessment: Developing methods to measure the impact of AI systems, enhancing metrology to ensure that these measurements are precise and repeatable, and building enablers for third-party audits.
  2. Development: Specifying the desired behaviour, designing an AI system that meets the specification, and verifying that the AI system meets its specification.
  3. Control: Developing monitoring and intervention mechanisms for AI systems, extending monitoring mechanisms to the broader AI ecosystem, and researching societal resilience.


Click on the following links for more information (available on the IMDA website at www.imda.gov.sg):