Award-winning investigative journalist Julia Angwin filed a federal class action lawsuit against Grammarly and its parent company, Superhuman, on Wednesday in the Southern District of New York, alleging the misappropriation of hundreds of writers’ identities to power a controversial AI “Expert Review” tool.
The Legal Battle Over Virtual Personas
The lawsuit, which seeks damages exceeding $5 million, identifies Angwin—founder of the nonprofit news organization The Markup—as the primary plaintiff. The complaint alleges that Grammarly’s “Expert Review” feature exploited the names and professional reputations of high-profile figures, including Stephen King and Neil deGrasse Tyson, to act as virtual editors without their consent. According to the filing, this practice constitutes a misappropriation of identities to generate profit for Superhuman and its owner by trading on the established credibility of journalists, authors, and editors.
Superhuman Discontinues Feature Amid Public Backlash
In anticipation of the legal action and following significant public criticism, Superhuman disabled the Expert Review feature. Ailian Gan, Superhuman’s director of product management, stated that the company is “reimagining” the tool to give experts real control over how they are represented. Gan admitted the company “missed the mark” in its attempt to bridge the gap between thought leaders and users, apologizing for the implementation and promising a different approach moving forward.
Violation of Commercial Likeness Laws
The legal core of the case rests on established statutes in New York and California that prohibit the commercial use of an individual’s name or likeness without explicit permission. Peter Romer-Friedman, Angwin’s attorney, characterized the case as a straightforward violation of these protections. He emphasized a growing societal concern where professionals spend decades mastering a craft only to see their skills and names appropriated by AI platforms without consent, effectively devaluing their life’s work.
Distorted Advice and Digital Doppelgängers
Angwin, who has extensively covered privacy issues and Silicon Valley’s impact on society, expressed shock at being “cloned” by the platform. She described the experience as a professional deepfake, noting that the AI-generated advice attributed to her was often counterproductive. In specific instances, the tool suggested revisions that made simple sentences more complex and harder to understand, or introduced themes irrelevant to the original text. Angwin described the AI’s performance as “scattershot” and “actively making it worse,” highlighting the reputational risk of having low-quality advice falsely attributed to veteran writers.
Corporate Defense and Future Iterations
Superhuman CEO Shishir Mehrotra dismissed the claims as “without merit,” vowing to defend the company in court. Despite this stance, Mehrotra acknowledged on LinkedIn that the company had received valid feedback regarding the misrepresentation of expert voices. He maintained that the company remains committed to a version of the platform that benefits both users and experts, though the current litigation seeks to bar the company from attributing to professionals words and advice they never actually provided.
