Featured research
capAI protocol
capAI offers an independent, accountable assessment procedure for AI systems, in line with the EU Artificial Intelligence Act (AIA).
Latest research articles
Auditing large language models
Large language models (LLMs) have transformed artificial intelligence research, but their development and deployment raise persistent ethical challenges. A new approach to auditing LLMs aims to address these concerns and promote responsible use.
AI ethical failures
Companies react to AI ethical crises with deflection, improvement, validation, or pre-emption strategies to manage public concerns and protect their reputation.
Ethics-based auditing to develop trustworthy AI
This study highlights the importance of ethics-based auditing for developing trustworthy AI and presents six best practices to help mitigate harm and improve user satisfaction.
AI & reputation
Unethical and opaque use of AI can lead to social norm violations and reputational damage.
Taking AI risks seriously: a new assessment model for the AI Act
A new risk assessment model integrates the EU AI Act with the IPCC's approach to risk, enabling estimation of the magnitude of AI risk by considering the interactions between risk determinants, drivers, and types.
A unified framework of five principles for AI in society
A unified ethical framework for AI emphasizes the principles of beneficence, non-maleficence, autonomy, justice, and explicability, providing a basis for socially beneficial uses of AI technology.
The ethics of algorithms: Mapping the debate
This study maps the ethical debate on algorithmic decision-making, examining the implications of delegating human tasks to algorithms and the need for responsible governance.