FTC Probes ChatGPT-Maker OpenAI Over AI-Generated Lies In 20-Page Letter
AI technology has advanced rapidly, and public interest surged when OpenAI's ChatGPT hit the scene. The technology's potential has prompted many in the artificial intelligence community to call for guidelines so that companies develop and harness its power responsibly. Now the FTC is throwing its hat into the ring on AI, and on OpenAI in particular, questioning whether the company is developing its AI responsibly.
The 20-page letter demands OpenAI's records on how it addresses risks related to its AI models. In particular, the FTC wants to know how the company handles the potential for its AI to "generate statements about real individuals that are false, misleading, or disparaging."
This is not OpenAI's first run-in with regulators over its ChatGPT software. In March of this year, Italian regulators had the company take ChatGPT offline over accusations that it violated the European Union's GDPR, which took effect in 2018, according to Reuters. OpenAI was allowed to restore ChatGPT access in Italy after it agreed to add age verification features and let European users block their information from being used.
The FTC's investigation into OpenAI's practices will surely not be the last by a regulatory agency. As the technology continues to develop at a rapid pace, more regulators and lawmakers are certain to take their own deep dive into how the tech affects users and the risks that come with it. At the time of this writing, OpenAI had not provided a response to the FTC's letter.
**Update 7/13/2023 6:20pm EST:** OpenAI's Sam Altman has responded to the FTC's letter on Twitter, opening his tweet by saying "it is very disappointing to see the FTC's request start with a leak and does not help build trust." The full thread on Twitter can be found here.