OpenAI faces potential pause in ChatGPT releases following FTC complaint

A public challenge could halt the deployment of ChatGPT and similar AI systems. The nonprofit research organization Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC) alleging that OpenAI is violating the FTC Act by releasing large language AI models such as GPT-4. CAIDP contends the model is “biased, deceptive” and a risk to privacy and public safety, and that it fails to meet the Commission’s guidelines calling for AI to be transparent, fair, and explainable.
The Center wants the FTC to investigate OpenAI and suspend future releases of large language models until they meet the agency’s standards. The researchers are asking for independent reviews of GPT products and services before launch, and for the FTC to create an incident reporting system and formal standards for AI generators.
OpenAI has been asked for comment; the FTC declined to comment. CAIDP President Marc Rotenberg was among the signatories of an open letter demanding a six-month pause in the work of OpenAI and other AI researchers to allow time for ethics discussions. OpenAI co-founder Elon Musk also signed the letter.
Critics of ChatGPT, Google Bard, and similar models have warned about flawed output, including inaccurate statements, hate speech, and bias. Users also can’t reproduce results, CAIDP notes. OpenAI itself acknowledges that AI can “reinforce” ideas whether or not they’re accurate. While newer releases like GPT-4 are more reliable, the concern is that people may lean on the AI without double-checking its content.
It’s not certain whether the FTC will act on the complaint, but any regulatory move would affect AI development across the industry. Companies could face delays while their models are assessed, and penalties if those models fall short of the Commission’s criteria. That added accountability could slow the field’s rapid pace.