Featured research
capAI protocol
capAI offers an independent and accountable conformity assessment procedure for AI systems, in line with the EU Artificial Intelligence Act (AIA).
Latest research articles
Auditing large language models
Large language models (LLMs) have revolutionized artificial intelligence research, but ethical challenges persist. A new approach to auditing LLMs aims to address these concerns and promote responsible AI use.
US Algorithmic Accountability vs. the EU Artificial Intelligence Act
A comparative analysis of the US Algorithmic Accountability Act and the European Artificial Intelligence Act.
The European legislation on AI
Philosopher Luciano Floridi analyzes European AI legislation, its philosophical approach, challenges in legal implementation, and the impact of the pandemic, offering insights for policymakers and researchers.
Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity
An analysis of the legal and regulatory challenges posed by generative AI and large language models (LLMs) in the European Union, spanning liability, privacy, intellectual property, and cybersecurity.
A unified framework of five principles for AI in society
A unified ethical framework for AI emphasizes the principles of beneficence, non-maleficence, autonomy, justice, and explicability, aiming to ensure that AI technology delivers positive social outcomes.
Taking AI risks seriously: a new assessment model for the AI Act
A new risk assessment model integrates the EU AI Act with the IPCC's approach to risk, enabling estimation of the magnitude of AI risk by considering the interaction between risk determinants, individual drivers, and types of risk.
The ethics of algorithms: Mapping the debate
A study maps the debate on the ethics of algorithms, examining the operations increasingly delegated from humans to algorithms and highlighting the ethical implications and the need for responsible governance.