
Researchers from ETH Zurich and Anthropic have presented evidence that current large language models can spontaneously de-anonymize social media users at scale, Inc. reports.
As detailed in a report by Simon Lehrman and Daniel Palek, the AI analyzes writing patterns and stray biographical details in anonymous posts and cross-references them with publicly available information on other platforms. In the experiments, the model linked 67% of Hacker News forum users to their actual LinkedIn profiles, drawn from a dataset of 89,000 individuals, and those matches were correct in every case.
The technique hinges on identifying “digital footprints”: distinctive vocabulary, favored discussion topics, and offhand references to everyday life. A pet’s name or a mention of a regular walking route, for instance, is enough for the AI to quickly find matches among verified accounts.
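The matching idea described above can be illustrated with a toy sketch. This is not the researchers’ actual pipeline, which relies on a large language model; it is a minimal stylometric stand-in using character n-gram profiles and cosine similarity, with all names and texts invented for the example.

```python
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Build a character n-gram frequency profile: a crude stylometric fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two frequency profiles (0.0 = no overlap)."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def best_match(anonymous_post, candidate_texts):
    """Return the candidate whose public writing is stylistically closest to the post."""
    anon = ngram_profile(anonymous_post)
    return max(candidate_texts,
               key=lambda name: cosine(anon, ngram_profile(candidate_texts[name])))

# Hypothetical example: shared details ("Biscuit", "mill trail") point to one candidate.
candidates = {
    "alice": "I love hiking with my dog Biscuit near the old mill trail every weekend",
    "bob": "Quarterly earnings reports and portfolio rebalancing strategies for retirement",
}
anonymous = "Took Biscuit out to the old mill trail again, great weekend hike"
print(best_match(anonymous, candidates))
```

A real attack would combine many such signals, and an LLM can weigh them far more flexibly than this fixed similarity measure, which is precisely what makes the reported results notable.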
Security experts caution that standard privacy measures, such as posting under a pseudonym, are no longer a sufficient safeguard: even allowing for AI errors and hallucinations, accuracy in controlled matching tests reached 90%.