Ready for New AI Legislation?

Written by Rory King | July 28, 2022

Bill pending in Congress creates new risks, liabilities for companies using AI/ML

TL;DR: New U.S. federal legislation is on the horizon that will impact every company using artificial intelligence and machine learning. Organizations leveraging AI/ML must start preparing today to meet impending regulatory requirements so they can avoid disruptions, fines, or lawsuits as new legislation comes into effect.

The Background

Lawmakers in Washington, D.C., have had AI on their radar for several years, but they appear to be a significant step closer to passing legislation that will impact every company using artificial intelligence and machine learning.

On July 20, the House Energy and Commerce Committee approved a draft bill, H.R. 8152, the American Data Privacy and Protection Act (ADPPA), and the bill now heads to the full House of Representatives for an as-yet unscheduled vote.

The bill imposes new requirements, and creates new liabilities, for companies that rely on AI and machine learning. Here’s a quick primer on the draft’s provisions, and some suggestions for how you can start preparing your company for ADPPA compliance.

New Reporting Requirements

The bill includes “Section 207. Civil Rights and Algorithms,” which aims to prevent algorithmic bias and discrimination against protected classes by imposing new requirements for “algorithm design evaluations” and “algorithm impact assessments.”

Companies must complete a “design evaluation” prior to deploying an algorithm in order to “evaluate the design, structure, and inputs of the algorithm, including any training data used to develop the algorithm, to reduce the risk” of potential discriminatory harms.

Companies defined in the bill as “large data holders” must also perform an annual impact assessment of any algorithms “that may cause potential harm to an individual” and that are used “to collect, process or transfer” data. (Large data holders are defined as companies with $250 million+ in annual revenues, holding data on 5 million+ individuals or devices, or holding “sensitive” data on 200,000+ individuals or devices.)

The assessment must include a detailed description of the algorithm’s design process and methodologies, purpose and uses, possible uses outside its intended scope, the data used by the algorithm (including training data), and the outputs. Companies must also describe in detail the steps taken to mitigate potential discriminatory harms.

The bill further requires “an assessment of the necessity and proportionality of the algorithm in relation to its stated purpose, including reasons for the superiority of the algorithm over nonautomated decision-making methods.” Impacted companies would need to submit their first impact assessments within two years of the bill’s passage and annually thereafter.
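
To make these reporting elements concrete, here is a minimal, hypothetical sketch of how the contents of an impact assessment might be captured as a structured, machine-readable record. It is a plain-Python illustration under our own assumptions, not a legal template; every field name below is ours, not language from the bill.

    from dataclasses import dataclass, field, asdict
    from datetime import date
    import json

    @dataclass
    class ImpactAssessment:
        """Illustrative record of the assessment contents described above."""
        algorithm_name: str
        design_process: str             # how the model was designed and built
        methodologies: list[str]        # e.g., model families, validation approach
        stated_purpose: str
        intended_uses: list[str]
        foreseeable_misuses: list[str]  # possible uses outside the intended scope
        data_sources: list[str]         # inputs, including training data provenance
        outputs_description: str
        mitigation_steps: list[str]     # steps taken to reduce discriminatory harm
        necessity_rationale: str        # why the algorithm beats manual methods
        assessed_on: date = field(default_factory=date.today)

        def to_json(self) -> str:
            record = asdict(self)
            record["assessed_on"] = self.assessed_on.isoformat()
            return json.dumps(record, indent=2)

    # Example: a record for a hypothetical credit-scoring model.
    assessment = ImpactAssessment(
        algorithm_name="credit-risk-v3",
        design_process="Gradient-boosted trees trained on 2019-2021 loan data",
        methodologies=["gradient boosting", "k-fold cross-validation"],
        stated_purpose="Estimate default risk for consumer loan applications",
        intended_uses=["loan underwriting support"],
        foreseeable_misuses=["employment screening"],
        data_sources=["internal loan history", "credit bureau records"],
        outputs_description="Default probability in [0, 1] plus reason codes",
        mitigation_steps=["removed protected-class proxies",
                          "disparate-impact testing"],
        necessity_rationale="More consistent and auditable than manual review",
    )
    print(assessment.to_json())

A record like this, versioned alongside the model itself, gives an external auditor a single artifact to review rather than a trail of scattered documents.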

Per the draft bill, companies are to use, “to the extent possible,” an external auditor or researcher (presumably a law firm, accountancy, or specialty consultant) to conduct the evaluations and assessments, and they must submit any completed evaluation or assessment to the Federal Trade Commission within 30 days of completion. (Submissions to the Commission must be unredacted, but trade secrets may be withheld from public disclosure.)

The bill empowers both federal and state authorities to enforce its provisions, although it does not specify sanctions. It also creates a private right of action, allowing individuals to sue for damages under the ADPPA.

Are You Ready for the Regulators?

Timing of a House vote on the bill is to be determined, but a consensus is building in Washington that passage of the ADPPA or a similar bill is coming soon. Even in the absence of legislation at the federal level, individual states are likely to view the proposed draft as a new “floor” for requirements as they consider their own bills around privacy and AI.

These developments should spur organizations that leverage AI and machine learning to start preparing today to meet the coming regulatory requirements, avoid disruption to their operations, and minimize financial and reputational risk.

Consider whether your organization can answer these questions:

  • How (and where) are you documenting model experimentation and training? (A minimal sketch of such an audit record follows this list.)
  • How are you tracking your models throughout their lifecycle?
  • How are you documenting the data inputs and outputs of your algorithms?
  • How (and where) are retired models archived?
  • Would you be able to provide documentation for an algorithm evaluation or assessment?
  • How interpretable and explainable are your models?
  • How have you implemented (and documented) ethical AI practices, and who is responsible for ethical AI?
  • What controls against bias have you built into your models, and how have you documented those controls?
  • How are you tracking and documenting model drift to demonstrate that bias did not creep into your algorithms over time?
  • Could you prove that your algorithms are trustworthy and being used for their intended purpose?

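If the first two questions above are hard to answer, a lightweight audit trail is a reasonable starting point. Below is a minimal sketch, using only the Python standard library, that logs each training run to an append-only JSON Lines file. The function names, fields, and file layout are all illustrative assumptions, not a prescribed format; dedicated lifecycle tools automate and extend this considerably.

    import hashlib
    import json
    import time
    from pathlib import Path

    AUDIT_LOG = Path("model_audit_log.jsonl")  # append-only run history (assumed layout)

    def fingerprint(path: str) -> str:
        """Hash a dataset file so the exact training inputs can be proven later."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def log_training_run(model_name: str, version: str, params: dict,
                         dataset_paths: list[str], metrics: dict) -> dict:
        """Append one immutable record per run: what ran, when, on which data."""
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "model": model_name,
            "version": version,
            "hyperparameters": params,
            "dataset_fingerprints": {p: fingerprint(p) for p in dataset_paths},
            "metrics": metrics,  # include fairness metrics, not just accuracy
        }
        with AUDIT_LOG.open("a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    # Example usage (assumes train.csv exists on disk):
    log_training_run(
        model_name="credit-risk",
        version="3.1.0",
        params={"max_depth": 6, "learning_rate": 0.1},
        dataset_paths=["train.csv"],
        metrics={"auc": 0.87, "demographic_parity_gap": 0.02},
    )

Each line in such a log ties a model version to the exact data and parameters that produced it, which is the raw material for the drift, bias, and lifecycle documentation the later questions ask about.
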
The days of companies self-policing their AI/ML projects (or not policing them at all) are coming to an end. The sooner organizations understand the implications of looming regulations for their operations and take steps to meet the new requirements, the less likely they will be to experience disruptions, legal penalties, or lawsuits as new legislation comes into effect.

Model lifecycle management tools from a solution provider like Verta can help organizations track and report on how their models were created, trained, tested, deployed, monitored, and managed. But organizations must first acknowledge that ethical AI policies and practices, along with interpretable, trackable, and explainable algorithms, have become “must-haves” in the current environment.

Are you ready for new regulatory requirements around AI/ML? Contact Verta to discuss your organization’s practices for managing model assets across the enterprise and how an Operational AI platform can help you prepare for the increased scrutiny the ADPPA would bring.