On Thursday, January 25, the Federal Trade Commission (FTC) Office of Technology hosted the FTC Tech Summit to discuss key developments in artificial intelligence (AI). The FTC brought together thought leaders from across the AI industry to consider how to foster a fair and inclusive AI market given the rapid advances in large language models and generative AI. The summit included remarks from Chair Lina Khan, Commissioners Rebecca Slaughter and Alvaro Bedoya, Chief Technology Officer Stephanie Nguyen, and Directors Henry Liu and Samuel Levine of the Bureau of Competition and the Bureau of Consumer Protection, respectively. The event also featured panel discussions on the role of chips and cloud infrastructure in AI development, the power of data in AI technologies and models, and AI consumer applications.
The summit further demonstrated the FTC’s interest in limiting the risks and harms that AI poses to consumers. The FTC is by far the most active federal regulator on AI issues. In addition to recently filing its first enforcement action against a company for using AI in a biased and unfair manner, the FTC has filed lawsuits against companies over AI-related bias and discrimination, deceptive trade practices, and copyright issues, and has issued guidance warning businesses about the legal implications of using and training AI models. As Chair Khan put it in her remarks, “AI is not legally exempt,” adding that the FTC is closely monitoring how companies use AI in allegedly anticompetitive ways and to deceive consumers.
This post summarizes key takeaways from the event’s remarks and panel discussions.
Key takeaways
- The FTC is exploring ways to leverage existing authorities to prevent harm from AI. During the event, FTC commissioners and staff indicated that the FTC will use its existing enforcement powers to minimize harm in the AI market. According to Slaughter, the best way to stay on top of the rapidly evolving AI market is to “leverage all of the FTC’s tools.” This includes aggressive use of the FTC’s consumer protection powers under Section 5 of the FTC Act. For example, the FTC could require companies that use AI models to notify consumers, and could require companies whose AI models were trained on illegally obtained data to delete both the models and the underlying data (a remedy required in the FTC’s recent settlement with Rite Aid). The FTC has also already authorized the use of compulsory process in nonpublic investigations of products and services that use, or claim to be produced using, AI. Commissioners noted that the FTC’s inaction during the emergence of earlier technologies, such as ad tech and social media, contributed to many of today’s harms, and that the agency is keenly aware of those lessons.
- The FTC continues to closely monitor technology and its impact on consumers. Commissioners Slaughter and Bedoya explained that the FTC is prepared to exercise its Section 6(b) authority in this area, which authorizes the commission to require entities to provide information about their business practices. According to the commissioners, such information would further inform the agency’s understanding of the state of AI and any future rules governing AI development. In fact, on the same day as the summit, the FTC issued Section 6(b) orders to five companies, requiring them to provide information about recent investments and partnerships involving generative AI companies and cloud service providers.
- The FTC continues to focus on the potential for AI to promote discrimination and bias. The commissioners emphasized that the FTC and other regulatory agencies must remain committed to curbing the potential discriminatory harms of AI, a priority central to the Biden-Harris administration’s AI policy. Bedoya explained the importance of knowing what data is used to train and develop AI systems, and highlighted the recent settlement between the FTC and Rite Aid. According to the FTC, Rite Aid deployed facial recognition software to identify shoplifting suspects that disproportionately misidentified minority customers. Levine said companies must work to reduce the discriminatory effects of their AI tools or stop using them altogether.
- The FTC will consider liability for upstream actors, not just end users. Khan said the FTC is working to pinpoint companies whose activities promote market concentration and the illegal use of data. She cited the FTC’s recent crackdown on robocalling, which focused on upstream companies that enabled illegal telemarketing sales. These comments are consistent with other recent FTC statements suggesting that companies that build and deploy AI should be held responsible for downstream consumer harm.
- The FTC is concerned about market concentration at the lower levels of the “tech stack.” Although the commissioners did not directly address perceived market concentration in the chip and cloud layers of the AI “stack,” panelists, particularly those on the panel covering the role of chips and cloud infrastructure in AI development, expressed concern that this concentration could hinder AI innovation and harm consumers. Dominant companies at these base layers may favor vertically integrated product lines, which can lead to higher prices and lower quality. Panelists urged that, to foster innovation and competition, customers must be able to move freely between vendors at every level of the stack.
- Financial services regulators are also paying increasing attention to AI. Consumer Financial Protection Bureau (CFPB) attorney Atul Desai, who participated as a panelist and spoke in his personal capacity, noted that the CFPB has already issued two circulars requiring companies that rely on complex algorithms to provide specific and accurate explanations when denying applications. He explained that the CFPB, like other agencies, is prioritizing AI-related capacity building and appears prepared to apply existing consumer finance laws where necessary.