

Perhaps the most discussed technical detail is the “Undercover Mode.” This feature reveals that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories.
The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
Laws requiring that AI usage be explicitly declared should have been put in place years ago.

If it were the law, then the AI itself would be coded to refuse to go “undercover,” and there would be legal consequences if anyone were caught doing it anyway. Torvalds’s stance only matters for how things ‘are’, not how they ‘could be’.
Would it be a cure-all? Of course not. Fraud still happens despite being illegal. But it’s better than not being able to trust anything ever again.