The FTC’s Path Forward on Algorithm-Based Business Models

October 2021

FTC Commissioner Rebecca Kelly Slaughter recently published a paper entitled “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission,” in which she discusses the Commission’s approach to artificial intelligence and algorithmic decision-making in the private sector.

Overall, her paper boils down to ten key recommendations, three directed at the FTC and seven at companies engaged in algorithmic decision-making:

The FTC should:

  1. Take a more aggressive approach to prohibiting unfair or deceptive acts or practices
  2. Expand data reporting requirements under FCRA
  3. Promulgate rulemaking requiring transparency, fairness, and accountability, with a focus on data abuse

Companies should:

  1. Proactively collect demographic data for self-testing 
  2. Closely study and deepen expertise in algorithm-based decision-making and data use
  3. Prioritize transparency, equity, and accountability
  4. Ensure algorithmic decision-making is explainable, defensible, and open to third-party testing
  5. Provide transparency through meaningful and intelligible information
  6. Conduct regular audits and impact assessments and appropriately redress erroneous or unfair algorithmic decisions
  7. Partner with the Commission on rulemaking

Key takeaway

As the FTC considers new rulemaking in emerging technologies, algorithmic decision-making, and data use, it is seeking to partner with stakeholders and experts. Companies developing and deploying these technologies should engage directly with the FTC and other federal agencies to help shape the direction of these critical markets.

Below is a detailed summary and analysis of each of Slaughter’s recommendations.

Recommendation 1: The FTC should take an aggressive approach to using existing authorities to prohibit unfair or deceptive acts or practices

Section 5 of the FTC Act: Prohibits unfair or deceptive acts or practices

Algorithmic disgorgement

One remedy the FTC has used within its Section 5 authority is algorithmic disgorgement, which requires companies found to have collected data illegally to give up the fruits of that collection: they may not profit from either the data itself or any algorithm developed using it. The recent settlement agreement with Everalbum Inc. is an example; the company was required to delete not just improperly collected data but also any models or algorithms developed using that data.

According to Slaughter, algorithmic disgorgement sends a message to “companies engaging in illicit data collection in order to train AI models: Not worth it.”

Deception authority

When a company makes claims about the quality or capabilities of its products or services, including automated systems, the FTC’s deception authority requires those statements to be supported by verifiable substantiation.

Algorithmic injustice

The FTC also uses the unfairness prong of Section 5, which prohibits conduct that causes or is likely to cause substantial injury to consumers, where that injury is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.

Slaughter says “The FTC can and should be aggressive in its use of unfairness to target conduct that harms consumers based on their protected status.”

Recommendation 2: Companies should collect demographic data for self-testing

Equal Credit Opportunity Act

The Equal Credit Opportunity Act (ECOA) prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, or age, or because an applicant receives income from public assistance or has in good faith exercised any right under the Consumer Credit Protection Act. Everyone who regularly participates in a credit decision, including those who set the terms of that credit and those who arrange financing (such as real estate brokers), must comply with ECOA’s antidiscrimination protections.

Slaughter calls for the FTC to encourage companies to proactively collect demographic data for the purpose of self-testing, so long as they can demonstrate to the FTC that they are not also using that data for impermissible purposes, such as marketing.

Slaughter recommends that the FTC encourage creditors to make use of the ECOA exception that permits the collection of demographic information to test their AI systems. 

Few creditors take advantage of this exception; Slaughter speculates that this is because they fear collecting the data will fuel claims that their AI decisions are biased.

Slaughter highlights that race-blindness is not the same as race-neutrality.

According to Slaughter, “the collection of demographic data for the purpose of self-testing is not a sign of bias, as long as it is clear that the data is actually and only being used for that purpose. Enforcers should see self-testing (and responsive changes to the results of those tests) as a strong sign of good-faith efforts at legal compliance and a lack of self-testing as indifference to alarming credit disparities.” 
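
Neither Slaughter nor the FTC prescribes a particular testing methodology, but as a rough illustration, a minimal self-test might compare outcome rates across demographic groups and flag large disparities. The Python sketch below is hypothetical: the group labels, decisions, and the 80% threshold (the EEOC’s informal “four-fifths” rule of thumb from disparate-impact analysis) are assumptions of this sketch, not drawn from the paper.

```python
# Hypothetical self-test sketch (not from Slaughter's paper): compare
# approval rates across demographic groups and flag any group whose rate
# falls below 80% of the highest group's rate (the "four-fifths" rule
# of thumb used in disparate-impact analysis).
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    # Illustrative decisions only: (demographic group, approved?)
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(decisions)
    print(rates)                          # {'A': 0.67, 'B': 0.33} (approx.)
    print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

Under Slaughter’s framing, regularly running such tests, and acting on their results, signals good-faith compliance, provided the demographic data is used only for the self-test.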

Recommendation 3: The FTC should expand data reporting requirements

Fair Credit Reporting Act

The Fair Credit Reporting Act gives consumers the right to see information reported about them and to dispute inaccurate information. It also requires credit reporting agencies to ensure maximum possible accuracy when preparing consumer reports. As companies adopt increasingly complex algorithmic decision-making platforms, the FTC will need to better understand the limitations of adverse action notices when credit denials result from automated AI systems.

Slaughter recommends expanding data reporting requirements under FCRA to help reveal systemic issues and their impacts. For example, broader reporting on the existence and correction of errors, the rates of adverse action notices, and the volume and nature of error complaints can help the Commission identify systemic problems that arise in algorithmic decision-making.

Recommendation 4: The FTC should closely study and deepen expertise in algorithm-based decision-making and data use

Section 6(b) of the FTC Act

Section 6(b) provides the FTC the ability to study in depth and write reports on how algorithms and related technologies are being deployed and how the FTC can effectively mitigate their harms. The FTC can collect information from individual businesses and investigate industry-wide phenomena.

Slaughter recommends the FTC continue to use its 6(b) authority to deepen expertise on the use and impact of algorithms and focus on the potential harms to consumers and competition. She also recommends the agency be allocated greater resources and a broader range of in-house expertise in order to exercise its 6(b) authorities.

FTC Guidance on the Commercial Use of AI (April 2021)

Slaughter’s paper follows FTC guidance from April 2021 on the commercial use of AI and the steps companies must take to ensure their AI does not exhibit bias. That guidance clarified that the FTC intends to use its full authorities (Section 5, FCRA, ECOA) to address data gaps, algorithm design flaws, and transparency failures. Under the guidance:

  1. AI developers should control for discriminatory outcomes of algorithms, retest over time, provide transparency, and seek help from independent sources to evaluate for potential bias they might have missed.
  2. Companies should disclose potential gaps in the data sets used in AI systems.
  3. Companies must disclose to users how they use consumer data.
  4. Companies must not misrepresent the capabilities of automated systems.

The April guidance includes the memorable line, “Hold yourself accountable – or be ready for the FTC to do it for you.”

Recommendation 5: Companies should ensure algorithmic decision-making is explainable, defensible, and open to third-party testing

Slaughter highlights that the principles of transparency, fairness, and accountability are likely to inform the FTC’s case enforcement work in algorithmic decision-making and its rulemaking under Section 18, as well as Congressional action. Therefore, to future-proof their algorithmic decision-making systems, companies should focus on these principles.

Recommendation 6: Companies should provide transparency through meaningful and intelligible information 

Proprietary algorithmic models are often cloaked in secrecy and operate with limited human input. This lack of transparency creates a black-box effect that can suggest developers are neither responsible nor accountable for harmful results, ultimately fostering consumer distrust.

Slaughter calls for developers and deployers of algorithmic systems to ensure their automated decisions are explainable and defensible, and for third parties (advocates, academics, and others) to be able to test widely for discriminatory and harmful outcomes.

Slaughter also calls on companies to provide effective transparency: specific information about their systems and processes that is meaningful and intelligible, rather than overwhelming users with inscrutable disclosures or nudging them to consent without reasonable understanding or choice.
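
The paper does not prescribe a particular explanation format, but for simple scoring models one common industry practice is to surface the features that most reduced an applicant’s score as plain-language “reason codes,” similar to those used in adverse action notices. The Python sketch below is hypothetical; the linear model, weights, and feature names are illustrative assumptions, not an FTC-endorsed method.

```python
# Hypothetical reason-code sketch for a toy linear credit-scoring model.
# Weights and feature names are illustrative assumptions, not an actual
# FTC-specified or industry-standard model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "late_payments": -0.8}

def score(applicant):
    """Linear score: sum of weight * feature value."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def reason_codes(applicant, top_n=2):
    """Return the features with the most negative score contributions,
    i.e., the plain-language reasons a score was low."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    worst_first = sorted(contributions.items(), key=lambda kv: kv[1])
    return [feature for feature, contrib in worst_first[:top_n] if contrib < 0]

if __name__ == "__main__":
    applicant = {"income": 1.2, "debt_ratio": 0.9, "late_payments": 2.0}
    print(round(score(applicant), 2))   # -1.66
    print(reason_codes(applicant))      # ['late_payments', 'debt_ratio']
```

The design point is intelligibility: a handful of concrete, applicant-specific reasons is more meaningful to a consumer than exposing the full model internals.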

Recommendation 7: Companies and the federal government should prioritize transparency, equity, and accountability

Slaughter calls for limiting or prohibiting unfair and discriminatory applications of algorithms. She points to the EU guidelines for trustworthy AI systems, which define trustworthiness to include transparency; diversity, nondiscrimination, and fairness; and accountability.

Through Executive Order (and in the previous Administration through OMB Memoranda), the Biden Administration has prioritized an increase of transparency, equity, and accountability in algorithmic and data systems.

Recommendation 8: Companies should conduct regular audits and impact assessments and appropriately redress erroneous or unfair algorithmic decisions

Slaughter states that regulation of algorithmic decision-making must involve real accountability and appropriate remedies.

Recommendation 9: The FTC should promulgate rulemaking requiring transparency, fairness, and accountability, with a focus on data abuse

Section 18 Rulemaking

The FTC has forward-looking rulemaking authority under Section 18 of the FTC Act. Slaughter notes that while the procedures required under the Magnuson-Moss Warranty-Federal Trade Commission Improvement Act of 1975 are more onerous than those governing other agencies (including a pre-rulemaking advance notice-and-comment period, special notifications to Congress, informal hearings, and other steps), in July 2021 the Commission removed additional self-imposed procedures in order to clear the way for enhanced and active rulemaking.

Slaughter highlights data abuse as a critical area for the Commission’s attention. She states, “the threats to consumers arising from data abuse, including those posed by algorithmic harms, are mounting and urgent.”

Recommendation 10: Stakeholders and experts should partner with the FTC on rulemaking

Slaughter urges stakeholders and experts to partner with the FTC in drafting rules for algorithmic decision-making.  

In her conclusion, Slaughter states that mitigating harms from algorithmic decision-making will require new tools and strategies. Some harms may be so profound as to warrant a moratorium. The FTC will consider context- and consequence-specific applications in its enforcement and rulemaking.

 

APPENDIX: Slaughter’s Taxonomy of Algorithmic Harms

Algorithm design flaws: 

  1. Faulty inputs: When data used to develop machine-learning algorithms reflect human biases or are not adequately representative, leading to results that replicate or exacerbate existing inequalities and injustices (“garbage in, garbage out”).
  2. Faulty conclusions: When algorithms find patterns in data but reach conclusions that are inaccurate or misleading, often because of failures in experimental design.
  3. Failure to adequately test: When constant monitoring, evaluation, and retraining, essential practices for identifying and correcting embedded bias and disparate outcomes, are neglected.

Algorithm-caused systemic harms: 

  1. Proxy discrimination: When an algorithmic system uses one or more facially neutral variables to stand in for or mirror a protected class (see the sketch following this list).
  2. Surveillance capitalism: When machine learning enables algorithms to process immense pools of consumer data and evolve (“optimize”) in a relentless effort to capture and monetize as much attention from as many people as possible.
  3. Inhibited market competition: When competition is inhibited by use of algorithms in cases such as pricing, collusion, and self-preferencing.
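
Neither the taxonomy nor current guidance specifies how to detect proxy discrimination, but one simple screening heuristic is to measure how well a nominally neutral variable predicts the protected attribute. The Python sketch below is a hypothetical illustration; the ZIP-code-style feature, labels, and majority-vote measure are assumptions of this sketch.

```python
# Hypothetical proxy-screening sketch: measure how strongly a nominally
# neutral feature (here, a ZIP-code-like value) predicts a protected
# attribute. All data and names below are invented for illustration.
from collections import Counter, defaultdict

def proxy_strength(pairs):
    """pairs: iterable of (neutral_value, protected_value).

    Returns (baseline, via_proxy): the accuracy of guessing the protected
    attribute by overall majority vote, versus majority vote within each
    neutral-feature value. A large gap flags a potential proxy.
    """
    by_value = defaultdict(Counter)
    overall = Counter()
    for neutral, protected in pairs:
        by_value[neutral][protected] += 1
        overall[protected] += 1
    n = sum(overall.values())
    baseline = max(overall.values()) / n
    via_proxy = sum(max(c.values()) for c in by_value.values()) / n
    return baseline, via_proxy

if __name__ == "__main__":
    data = [("90210", "X"), ("90210", "X"), ("90210", "Y"),
            ("10001", "Y"), ("10001", "Y"), ("10001", "X")]
    baseline, via_proxy = proxy_strength(data)
    print(f"baseline={baseline:.2f}, via_proxy={via_proxy:.2f}")
    # -> baseline=0.50, via_proxy=0.67
```

A large gap between the baseline accuracy and the proxy-based accuracy suggests the “neutral” variable mirrors the protected class and warrants closer scrutiny.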

 
