OpenAI defends fair use in response to New York Times lawsuit over AI-generated content




OpenAI fired back today against The New York Times’ copyright infringement lawsuit, arguing that training AI models on publicly available content such as news articles constitutes fair use.

In a lengthy blog post, OpenAI detailed its efforts to collaborate with news organizations, citing partnerships with the Associated Press, Axel Springer, and NYU. The company emphasized its intent to support journalism by assisting with tasks, providing training data, and enabling real-time content display with attribution in ChatGPT.

The core of the dispute is OpenAI’s use of copyrighted material to train its AI models. OpenAI argues that training on publicly available materials, such as news articles, is permitted under the legal doctrine of fair use, which allows limited use of copyrighted works for purposes such as criticism, commentary, and news reporting.

Acknowledging rare instances of “regurgitation,” where models unintentionally reproduce parts of their training data, OpenAI says it actively works to minimize such occurrences. The company emphasizes that users are expected to act responsibly and that manipulating its models to intentionally plagiarize is prohibited.

“We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit,” OpenAI wrote.

OpenAI also alleges that The New York Times refused to share specific examples of regurgitation despite requests, and suspects the newspaper intentionally manipulated prompts to trigger the issue. Even so, while maintaining that the lawsuit lacks merit, OpenAI expresses hope for a future partnership with The New York Times.

More here.

