Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems

Fabian Kovac, Sebastian Neumaier, Timea Pahi, Torsten Priebe, Rafael Rodrigues, Dimitrios Christodoulou, Maxime Cordy, Sylvain Kubler, Ali Kordia, Georgios Pitsiladis, John Soldatos, Petros Zervoudakis

Published: 2025/9/30

Abstract

Artificial Intelligence has rapidly become a cornerstone technology, significantly influencing Europe's societal and economic landscapes. However, the proliferation of AI also raises critical ethical, legal, and regulatory challenges. The CERTAIN (Certification for Ethical and Regulatory Transparency in Artificial Intelligence) project addresses these issues by developing a comprehensive framework that integrates regulatory compliance, ethical standards, and transparency into AI systems. In this position paper, we outline the methodological steps for building the core components of this framework. Specifically, we present: (i) semantic Machine Learning Operations (MLOps) for structured AI lifecycle management, (ii) ontology-driven data lineage tracking to ensure traceability and accountability, and (iii) regulatory operations (RegOps) workflows to operationalize compliance requirements. By implementing and validating its solutions across diverse pilots, CERTAIN aims to advance regulatory compliance and to promote responsible AI innovation aligned with European standards.
