Last week, a team of researchers published a paper showing that it was able to get ChatGPT to bits of data including people’s phone numbers, email addresses and dates of birth that it had been trained on by asking it to repeat words “forever”. Doing this now is a violation of ChatGPT’s terms of service, according to a in 404 Media and Engadget’s own testing.

“This content may violate our content policy or terms of use”, ChatGPT responded to Engadget’s prompt to repeat the word “hello” forever. “If you believe this to be in error, please submit your feedback — your input will aid our research in this area.”

There’s no language in OpenAI’s , however, that prohibits users from asking the service to repeat words forever, something that 404 Media notes. Under “”, OpenAI states that users may not “use any automated or programmatic method to extract data or output from the Services” — but simply prompting the ChatGPT to repeat word forever is not automation or programmatic. OpenAI did not answer to a ask for comment from Engadget.

The chatbot’s behavior has pulled back the curtain on the training data that modern AI services are powered by. Critics have accused companies admire OpenAI of using enormous amounts of data available on the internet to build proprietary products admire ChatGPT without consent from people who own this data and without compensating them.

Source link