Friday, February 24, 2023

ChatGPT Is Ingesting Corporate Secrets

Noor Al-Sibai (Hacker News, Bruce Schneier):

After catching snippets of text generated by OpenAI’s powerful ChatGPT tool that looked a lot like company secrets, Amazon is now trying to head its employees off from leaking anything else to the algorithm.

According to internal Slack messages that were leaked to Insider, an Amazon lawyer told workers that they had “already seen instances” of text generated by ChatGPT that “closely” resembled internal company data.

This issue seems to have come to a head recently because Amazon staffers and other tech workers throughout the industry have begun using ChatGPT as a “coding assistant” of sorts to help them write or improve strings of code, the report notes.

Just like you once had to pay GitHub to keep your repositories private, perhaps ChatGPT will let you pay not to have your inputs become part of its training data.


Update (2023-02-27): Damien Petrilli:

There is this form you can use to opt out of data training.

I did it and never got any reply. So not sure you can trust them.

It’s a Google Doc.


People still put secrets in their code? What year is this?
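The commenter's point is that credentials shouldn't live in source files at all, since anything in the code can end up pasted into a third-party tool. A minimal sketch of the usual alternative, reading secrets from the environment at runtime (the variable name here is hypothetical):

```python
import os

def get_api_key(var_name: str = "SERVICE_API_KEY") -> str:
    """Fetch a credential from the environment instead of hardcoding it.

    If the variable isn't set, fail loudly rather than falling back to a
    value embedded in the codebase.
    """
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(
            f"{var_name} is not set; configure it outside the codebase"
        )
    return key
```

Code written this way stays safe to share or paste: the secret itself lives in the deployment environment (or a secrets manager), not in the text of the program.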

Broader than just “secrets”, it sounds like employees are wilfully entering proprietary work product into a free third-party app, then finding they get what they deserve. That’s a policy/training problem.
