The True Cost of Non-Compliance


Discover the hidden costs of non-compliance


With the AI Act, the European Union becomes the first major legislative body to implement and enforce comprehensive regulation of Artificial Intelligence (AI). The Act categorizes AI systems into risk levels, each with corresponding regulatory measures and legal repercussions for non-compliance. The most stringent obligations fall on providers of AI systems in the high-risk category. In this way, regulators aim to impose costs on operators of risky AI systems to deter misuse and mitigate potential harms to society. The true cost of non-compliance, however, extends far beyond regulatory fines. This article defines non-compliance and outlines the range of costs operators may face if they fail to adhere to the new regulation, some of them more obvious than others.

Author: oxethica
Effective Date: August 2024

What is Non-Compliance?

Non-compliance refers to the failure to adhere to laws, regulations, guidelines, or specifications relevant to a business or industry. These regulations can vary widely, encompassing everything from data protection laws like GDPR to industry-specific standards such as HIPAA in healthcare or SOX in finance. Non-compliance can result from intentional violations, inadvertent oversights, or inadequate systems and processes.

What is the cost of Non-Compliance?

There are three types of financial repercussions. The most obvious are regulatory costs: fines and penalties levied by regulators in case of non-compliance. The second are liability risks: algorithmic bias, for example, can lead to discrimination and exclusion, exposing the offending organisation to legal claims. The third are opportunity costs: business lost as a result of reputational damage.

Regulatory Costs 

The EU AI Act establishes a framework for sanctioning non-compliance in the member states. Failure to comply with the Act's obligations, or engaging in prohibited practices, leads to financial penalties and fines. Historically, the EU has not been shy about prosecuting regulatory violations: since 2018, it has collected almost five billion euros in fines for violations of the GDPR alone.

According to Article 99 of the EU AI Act:

  • Non-compliance with prohibited AI practices can lead to administrative fines of up to 35,000,000 EUR or 7% of a company's total worldwide annual turnover, whichever is higher.
  • Failing to comply with other outlined obligations can result in fines of up to 15,000,000 EUR or 3% of total worldwide annual turnover, whichever is higher.
  • Providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities may incur fines of up to 7,500,000 EUR or 1% of total worldwide annual turnover, whichever is higher.

EU legislators stress fair market conditions, meaning fines will be proportionate to the size, market share, and resources of the affected operator. For small and medium-sized enterprises (SMEs), each fine is capped at the fixed amount or the percentage of annual turnover, whichever is lower (see the sketch below).
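To make the arithmetic of these caps concrete, here is a minimal sketch in Python. The tier amounts and percentages mirror the figures quoted above; the function, its name, and the example turnovers are illustrative assumptions, not part of the Act.

```python
# Illustrative sketch of the Article 99 fine caps described above.
# Tier values mirror the figures quoted in this article; the function
# is a simplification for illustration, not legal advice.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Art. 99(3)
    "other_obligation": (15_000_000, 0.03),       # Art. 99(4)
    "misleading_information": (7_500_000, 0.01),  # Art. 99(5)
}

def max_fine(violation: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine for a violation tier.

    For large undertakings the cap is the higher of the fixed amount
    and the turnover percentage; for SMEs it is the lower (Art. 99(6)).
    """
    fixed_cap, pct = TIERS[violation]
    turnover_cap = pct * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large provider with EUR 2 billion turnover engaging in a prohibited
# practice: 7% of turnover (EUR 140M) exceeds EUR 35M, so the cap is 140M.
print(max_fine("prohibited_practice", 2_000_000_000))            # 140000000.0
# The same violation by an SME with EUR 20 million turnover: 7% of
# turnover (EUR 1.4M) is below EUR 35M, so the cap is 1.4M.
print(max_fine("prohibited_practice", 20_000_000, is_sme=True))  # 1400000.0
```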

Ultimately, each member state will determine the specific financial and legal repercussions, assessed based on the severity and circumstances of the violation. Companies and organisations covered by the EU AI Act should prepare for substantial fines and potential legal battles if they fail to comply.

Cumulative Fines: The GDPR and the AI Act

Fines under the AI Act are cumulative with fines under other regulations, making it crucial for AI system providers to consider the potential for complex compliance failures, particularly those involving data privacy violations.

For example, the AI Act overlaps with the data protection principles laid out in the General Data Protection Regulation (GDPR), which has so far played a major role in regulating the responsible processing of personal and sensitive data in AI systems. However, the GDPR's technology-neutral approach limits its ability to address the specific ethical and technological risks associated with AI; hence the introduction of the AI Act. In practice, this means that non-compliance with both regulations is not only possible but will result in cumulative fines.

Currently, the relationship between the GDPR and the AI Act remains ambiguous; conflicts and vagueness will most likely have to be resolved in practice once the AI Act comes into force. It is therefore important that companies that develop and/or make use of AI systems carefully consider their role under each regulation to ensure compliance with both laws.

On the other hand, the overlap between the two regulations may also reduce compliance effort. Both regulations require impact assessments evaluating compliance with, for example, requirements for privacy or bias mitigation: the data protection impact assessment (DPIA) under Article 35 of the GDPR and the fundamental rights impact assessment (FRIA) under Article 27 of the AI Act. Since the format of these assessments is not specified, it is possible to harmonise the overlapping aspects and create standardised or modular assessments that serve both the AI Act and the GDPR compliance processes, as sketched below.
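As a rough illustration of what such a modular assessment could look like, the sketch below tags each assessment module with the regulation(s) it serves, so shared modules are documented once and reused for both the DPIA and the FRIA. The module names and structure are our own assumptions, not prescribed by either regulation.

```python
# Hypothetical sketch of a modular impact assessment. Each module is
# tagged with the regulation(s) whose assessment it feeds; shared
# modules are filled in once and reused for both the DPIA (GDPR) and
# the FRIA (AI Act). Module names are illustrative, not prescribed.

from dataclasses import dataclass

@dataclass
class AssessmentModule:
    name: str
    regulations: set  # e.g. {"GDPR", "AI Act"}

MODULES = [
    AssessmentModule("System and processing description", {"GDPR", "AI Act"}),
    AssessmentModule("Necessity and proportionality of processing", {"GDPR"}),
    AssessmentModule("Bias and discrimination risks", {"GDPR", "AI Act"}),
    AssessmentModule("Impact on fundamental rights", {"AI Act"}),
    AssessmentModule("Mitigation and oversight measures", {"GDPR", "AI Act"}),
]

def assemble(regulation: str) -> list:
    """Collect the modules that make up one regulation's assessment."""
    return [m.name for m in MODULES if regulation in m.regulations]

# Three of the five modules in this sketch are shared, so their
# findings only need to be documented once for both assessments.
print("DPIA:", assemble("GDPR"))
print("FRIA:", assemble("AI Act"))
```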

Liability Risk

Non-compliance and ethical failures of an AI system create considerable liability risks, which not only incur regulatory fines but may also result in substantial costs for court proceedings and legal representation. Costs stemming from liabilities may also include reparations to harmed parties, such as victims of biased and discriminatory AI outcomes or of other consumer-related failures.

The liability risk is not to be taken lightly. The European Commission has proposed an AI Liability Directive, which aims specifically to protect people who have been harmed by AI failures. The directive addresses gaps and complexities in current legislation pertaining to the risks and challenges unique to AI-powered technologies. In particular, it will reduce the burden of proof and ultimately facilitate lawsuits against organisations and their AI systems.

To put this into practical terms: if a consumer suspects that a loan was denied due to unfair discrimination by an automated system, they will no longer be required to demonstrate causality; instead, it will be sufficient to demonstrate the likelihood that the AI system delivered a biased prediction. The latter is feasible without access to the internals of the AI system and can be achieved by interacting with it, for example through counterfactuals or what-if analyses, as in the sketch below.
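As a minimal sketch of such a probe, the snippet below compares a model's decision on an application against a counterfactual that differs only in a protected attribute. The toy scoring function, field names, and values are hypothetical stand-ins for a real deployed system.

```python
# Minimal sketch of a black-box counterfactual probe: query the system
# with an application and with a counterfactual that differs only in a
# protected attribute. A systematic flip in the decision across many
# such pairs suggests the attribute influences the outcome.

from copy import deepcopy

def score_application(app: dict) -> bool:
    """Toy stand-in for a deployed loan-decision model (hypothetical).
    This illustrative model penalises an attribute it should ignore."""
    approved = app["income"] > 40_000 and app["credit_history"] == "good"
    return approved and app["gender"] != "female"  # the bias we want to expose

def counterfactual_flip(app: dict, attribute: str, new_value) -> bool:
    """True if changing only `attribute` changes the model's decision."""
    counterfactual = deepcopy(app)
    counterfactual[attribute] = new_value
    return score_application(app) != score_application(counterfactual)

# Probe whether gender alone changes the decision for an otherwise
# identical application (all field names and values are illustrative).
application = {"income": 45_000, "credit_history": "good", "gender": "female"}
print(counterfactual_flip(application, "gender", "male"))  # True: decision flips
```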

Reputational Costs: The Hidden Costs of Non-Compliance

Non-compliance with ethical standards can lead to significant reputational risks, often resulting in financial losses. When a company fails to adhere to regulations or violates social values through its AI systems, public trust erodes. This loss of confidence among customers and investors can damage a company's image and bottom line through opportunity costs and lost sales.

Consider, for example, the public controversies in which Google has been implicated.

Case Study: Google's Image Labeling Controversy

The tech giant has faced numerous controversies related to its AI systems and handling of personal data, significantly impacting its reputation, stock value and organisational stability. Public criticism has, at times, led to employee resignations and layoffs. 

A notable incident that damaged the company's reputation involved Google's collaboration with the US military on AI technology for interpreting drone footage. The public outcry over this partnership, especially in light of past incidents such as Google's image-labeling technology misclassifying Black users as "gorillas", raised serious ethical concerns about misuse, bias, and discrimination. The application of biased AI technologies in military contexts could have catastrophic consequences, endangering innocent lives.

In response to the backlash, Google suspended operations and research related to facial recognition technology until proper ethical and legal frameworks were established, and even voiced support for the EU's proposed five-year moratorium on facial recognition. While Google's stock price initially suffered, the company managed to recover by taking responsibility for its failures and demonstrating a commitment to ethical considerations in its AI projects.

Investing in Robust Response Strategies

Given that AI technologies are still evolving, unforeseen incidents are likely. A company's response strategy before and during controversies is crucial for mitigating reputational damage. This involves carefully weighing the severity of the technological failure, the public outrage, the costs of corrective measures, and the expected financial value of the AI system in question. An inadequate response can exacerbate reputational damage, leading to a loss of customers and investors.

Reputational damage can be particularly devastating for smaller companies that lack the large market shares, long-term stability, and diversified product portfolios of giants like Google or Amazon. Unlike these big players, smaller firms cannot rely on a broad safety net of market dominance and financial reserves to cushion the blow of a public relations crisis.

Ensure Compliance with oxethica

As we have outlined in this article, the true cost of non-compliance extends far beyond the financial penalties outlined in the AI Act. It encompasses a wide range of hidden costs, including cumulative fines, liability risks, reputational damage, loss of customer trust, and long-term financial repercussions. 

For businesses of all sizes and across all industries, understanding and mitigating these risks is essential for sustainable success. By investing in robust compliance programs, leveraging technology, and fostering a culture of compliance, companies can minimize the risks and costs associated with non-compliance. Ultimately, the cost of compliance is an investment in the company's future, protecting it from the potentially devastating consequences of non-compliance.

oxethica, a leading AI Governance Platform, empowers companies to manage their AI systems effectively and stay compliant with ethical and regulatory standards. By leveraging oxethica's research-proven tools, businesses can safeguard themselves from both the hidden and the immediate costs of non-compliance. For example, our conformity assessment tool helps you reuse existing GDPR compliance documents to reduce the compliance effort needed for the AI Act. Additionally, our AI inventory tool assists companies in maintaining good records of their AI systems and swiftly identifying their risk levels. Get started with oxethica today to secure a compliant and successful future for your organization.

The Cost of Non-Compliance Summary

What is an example of non-compliance?

Non-compliance refers to the failure to adhere to laws, regulations, guidelines, or specifications relevant to a business or industry. For example, the most severe case of non-compliance with the EU AI Act would be engaging in prohibited practices, such as developing or deploying AI systems aimed at manipulating human behavior to undermine free will. However, sanctionable offenses also include negligence towards the responsibilities outlined by the legislation, such as providing incomplete documentation or failing to report malfunctions.

What are the costs of non-compliance?

Non-compliance comes at a financial and reputational cost. Financial penalties can run to tens of millions of euros: engaging in prohibited AI practices, for example, may lead to a fine of up to 35,000,000 EUR or 7% of total worldwide annual turnover. Legal liabilities incurred through regulatory breaches or customer lawsuits also expose companies to considerable costs. Moreover, unethical practices and failure to comply with regulations may erode the trust of customers and investors in a company's competence, which in turn can result in revenue loss and organisational instability.

What are the costs of compliance?

Compliance expenditures may affect not only AI system providers but also users, as the additional investments required for compliance may be passed on to them. To ensure ethical and legal adherence to regulations, companies will need to restructure their IT and legal teams, which may involve costs for ongoing staff training on new technologies and regulations. Furthermore, companies may need to hire additional consultants, administrators, and programmers to effectively manage communication, documentation, and quality management systems. To mitigate these costs, providers can benefit from investing in tools like capAI, oxethica's conformity assessment procedure, to establish and maintain smooth regulatory compliance of their AI systems.
