The article discusses the lawsuit filed by The New York Times against OpenAI for copyright infringement, specifically alleging that OpenAI’s ChatGPT “recites Times content verbatim.” The lawsuit claims that GPT-4 infringes on The New York Times’s content, challenging OpenAI’s assertion that its use of the data is transformative under the legal doctrine of fair use.
OpenAI responded to the lawsuit by alleging that The New York Times used manipulative prompting techniques to induce ChatGPT to regurgitate lengthy excerpts, and that the lawsuit rests on misuse of ChatGPT to “cherry pick” examples. OpenAI claims that GPT-4 is designed not to output verbatim content and that The New York Times used specific prompting techniques to break GPT-4’s guardrails and produce the disputed output.
OpenAI further claims that the methods The New York Times used to generate verbatim content were a violation of its usage policies and constituted misuse. The company emphasized its commitment to building resistance against adversarial prompt attacks and cited its response to earlier reports of ChatGPT generating verbatim text as evidence of its commitment to respecting copyright.
In summary, OpenAI’s response seeks to undermine The New York Times’s claims by arguing that the lawsuit is based on adversarial attacks and misuse of ChatGPT to elicit verbatim responses. The response maintains that GPT-4 is designed to prevent verbatim output and that The New York Times intentionally manipulated prompts to generate the disputed content. OpenAI also emphasizes its commitment to building resistance against adversarial prompt attacks and reiterates its support for journalism and its partnerships with news organizations.