AI Tools Are No Exception to Consumer Protection Laws

Written by Rory King | April 27, 2023

Using AI is no excuse for breaking the law.

That was the message of a joint statement this week from four U.S. government agencies regarding their commitment to enforce laws against discrimination and bias in automated decision-making systems.

The four agencies are the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the U.S. Equal Employment Opportunity Commission (EEOC).

Their joint statement notes that automated systems, including those that use machine learning algorithms, have the potential to facilitate fair and efficient decision-making in areas like housing, credit, and employment. But they also have the potential to perpetuate existing biases and discrimination in those and other areas, with real consequences for consumers and their finances.

The agencies affirmed their commitment to enforcing existing laws that prohibit discrimination in lending, employment, and other areas, and to ensuring that automated systems are used in compliance with these laws.

For example, current CFPB guidelines confirm that federal consumer financial laws and adverse action requirements apply regardless of the technology being used to make decisions that impact consumers’ finances, according to the statement. “The fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”

Where AI Can Go Wrong

The agencies’ statement outlines several potential sources of discrimination in automated systems, including:

  • Data and Datasets – due to “unrepresentative or imbalanced datasets” that incorporate historical bias.
  • Model Opacity and Access – due to the “black box” nature of many AI systems and models, which makes it difficult to determine whether a decision is fair.
  • Design and Use – due to system developers failing to understand or account for the context in which their systems will be used.

“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” said FTC Chair Lina M. Khan in a separate statement. “Technological advances can deliver critical innovation — but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

Actions for Enterprises

A study by Verta Insights, the research group within Verta, revealed that 89% of the more than 300 AI/ML executives and practitioners who participated in the research believe that AI regulations will increase over the next three years. In addition, 77% of the participants believe that AI regulations will be strictly enforced – a finding reinforced by this week’s statement from the government agencies. (Register for the on-demand webcast discussing the research results to receive a copy of the research report upon its release in May.)

Executives and stakeholders in the machine learning lifecycle should consider taking steps now to ensure that they meet not only current regulations but also the requirements of proposed laws like the American Data Privacy and Protection Act (ADPPA), the Algorithmic Accountability Act, and the EU AI Act, as well as the various state- and local-level laws coming into force.

  • Review current practices: Examine existing practices and policies to ensure they comply with laws and regulations that prohibit discrimination in lending, employment, and other areas. This includes assessing potential risks of bias and discrimination in machine learning models and identifying ways to mitigate those risks.
  • Diversify and document training data: Develop and test models using a diverse set of data, and ensure that the data used to train the algorithm is free from bias. This includes testing algorithms for disparate impact (see the first sketch after this list) and regularly reviewing and updating models to ensure they remain fair and unbiased. Document all of this as evidence of your intent to comply with regulatory requirements.
  • Establish a single source of model truth: Use a model catalog to centrally store all your models, model versions, and related metadata and documentation. A catalog gives you easy access to the assets and information you need to demonstrate compliance, lets you standardize governance and deployment processes, and helps diverse stakeholders collaborate on compliance issues.
  • Monitor and address potential biases: Monitor models in production to identify and address potential biases that may emerge over time (see the second sketch after this list). This includes regularly auditing the data used to train the algorithm and the algorithm's output to detect instances of bias or discrimination, as well as data drift or model degradation.
  • Promote transparency and accountability: Ensure you can provide clear documentation of how models make decisions, and establish clear lines of responsibility and accountability for the decisions those models make. This includes providing clear disclosure to consumers about how models are used and their potential impact on decision-making.
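
To make the disparate-impact testing above concrete, here is a minimal Python sketch of the "four-fifths rule" check. The column names, groups, and data are hypothetical, and a real fairness audit should rely on vetted tooling and legal guidance rather than this simplified ratio.

```python
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged group vs. privileged group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[unprivileged] / rates[privileged]

# Hypothetical scored credit applications: 1 = approved, 0 = denied.
scores = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

ratio = disparate_impact_ratio(scores, "group", "approved",
                               privileged="A", unprivileged="B")

# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
if ratio < 0.8:
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
else:
    print(f"Within the four-fifths guideline: ratio = {ratio:.2f}")
```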

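Likewise, here is a minimal sketch of the production monitoring described above, using a two-sample Kolmogorov-Smirnov test from SciPy as one common drift signal. The feature, distributions, and alert threshold are illustrative assumptions; a production system would track many features and model outputs over time.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Hypothetical distributions of one model input feature: a reference
# sample retained from training vs. recent live traffic.
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.2, size=5_000)  # shifted upstream data

stat, p_value = ks_2samp(reference, production)

# Illustrative threshold; tune alerting to your traffic volume in practice.
if p_value < 0.01:
    # A real pipeline would raise an alert here and trigger an audit of
    # recent model outputs for emerging bias or degradation.
    print(f"Drift detected (KS statistic = {stat:.3f}, p = {p_value:.1e})")
else:
    print("No significant drift detected")
```
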
Contact Verta to arrange a discussion of further steps your organization can take to meet AI regulatory requirements and a complimentary consultative assessment of your readiness for AI regulatory compliance.