Taking AI risks seriously: a new assessment model for the AI Act
New risk model integrates EU AI Act with IPCC's risk approach
The European Union's Artificial Intelligence Act (AIA) aims to regulate the development and deployment of AI systems in the EU, ensuring that they respect fundamental rights and values. However, assessing the risks associated with AI systems is a complex task that requires a comprehensive and adaptable approach. In a recent article, researchers propose a new risk assessment model that integrates the AIA with the risk approach developed by the Intergovernmental Panel on Climate Change (IPCC) and related literature.

The proposed model estimates AI risk magnitude by considering the interaction between risk determinants, the individual drivers of those determinants, and multiple risk types. The four risk categories defined by the AIA (unacceptable, high, limited, and minimal) are integrated into the model, along with the IPCC's four risk determinants: hazard, exposure, vulnerability, and response. The model also includes interactional risk types, which capture the risks arising from the interaction between AI systems and their environment.

The proposed model offers two main contributions. First, it enhances AIA enforcement by helping national regulators and AI providers develop more sustainable and effective risk management measures. Second, it supports granular regulation of AI systems through scenario-based risk assessment, adapting to their versatile and uncertain applications. Overall, the model provides a comprehensive and adaptable approach to assessing AI risks, enabling regulators and providers to take those risks seriously and ensure that AI systems respect fundamental rights and values.
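To make the structure of such a scenario-based assessment concrete, the sketch below scores the IPCC's four determinants for one hypothetical deployment scenario and maps an aggregate risk magnitude onto the four AIA categories. The article does not publish an aggregation function or numeric thresholds, so the multiplicative combination, the [0, 1] scoring scale, and the category cut-offs here are illustrative assumptions, not the authors' model.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One deployment scenario; each determinant scored in [0, 1] (assumed scale)."""
    hazard: float         # severity of the potential harm
    exposure: float       # extent of people/assets exposed to the harm
    vulnerability: float  # susceptibility of those exposed
    response: float       # effectiveness of mitigation/response measures

def risk_magnitude(s: Scenario) -> float:
    """Illustrative aggregation: determinants combine multiplicatively,
    and a stronger response proportionally reduces the resulting risk."""
    return s.hazard * s.exposure * s.vulnerability * (1.0 - s.response)

# Hypothetical thresholds mapping magnitude onto the four AIA categories.
AIA_THRESHOLDS = [
    (0.6, "unacceptable"),
    (0.3, "high"),
    (0.1, "limited"),
]

def aia_category(magnitude: float) -> str:
    for threshold, label in AIA_THRESHOLDS:
        if magnitude >= threshold:
            return label
    return "minimal"

# Example scenario: severe hazard, wide exposure, vulnerable population,
# weak mitigation measures.
scenario = Scenario(hazard=0.9, exposure=0.8, vulnerability=0.9, response=0.2)
magnitude = risk_magnitude(scenario)   # 0.9 * 0.8 * 0.9 * 0.8 = 0.5184
print(aia_category(magnitude))         # prints "high"
```

Because the assessment is scenario-based, the same AI system can be scored under several deployment scenarios and regulated at the granularity of its riskiest realistic use, rather than with a single static label.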