This report, ‘Operationalising AI Safety: A Lifecycle Approach’, examines how rapidly proliferating AI systems can be governed responsibly in high‑stakes contexts, with a particular focus on the Global South. It highlights the need to balance AI’s transformative potential in areas such as scientific innovation and productivity with the responsibility to anticipate and mitigate emerging risks, especially where technological deployment intersects with infrastructural constraints, linguistic diversity, and distinct cultural realities.
The report situates AI safety as a socio-technical field that extends beyond narrow technical metrics, emphasising standards and evaluative methodologies that align AI development with the collective interests of society. Part I surveys key conceptions and the evolution of AI safety, engaging with contemporary debates and interpretive challenges, and underscores the importance of contextualised approaches rooted in Global South experiences. Part II adopts an AI lifecycle lens to identify stage-specific risks and harms, examine corresponding governance mechanisms and legal frameworks, and set out core considerations for implementation in the Global South.
Designed as a resource for academics, technologists, civil society, and policymakers, the report seeks to inform global discussions on responsible AI deployment. It is part of a larger project on the evolving landscape of AI safety in the Global South, supported by the Konrad-Adenauer-Stiftung (KAS).