Washington, DC — A groundbreaking new report released by PEN America and Consumer Reports, titled Treating Online Abuse Like Spam: How Platforms Can Reduce Exposure to Abuse While Protecting Free Expression, calls on social media platforms to fundamentally rethink how they address online abuse. Drawing inspiration from how email providers manage spam, the report urges platforms to adopt a model that empowers individual users to proactively filter out and quarantine abuse so they can decide for themselves if and when to interact with it.
Online abuse is rampant on major social media platforms, affecting nearly half of Americans. Those from historically marginalized groups and public-facing professions—journalists, writers, academics, creators—are disproportionately targeted. Yet current platform strategies remain largely reactive, requiring users to be exposed to abuse, often repeatedly, in order to mitigate it through tools such as blocking or reporting. This approach is psychologically damaging and, by itself, inadequate for protecting free expression online.
“Companies are falling short—often acting too late to prevent harms associated with online harassment,” notes Yael Grauer, program manager, cybersecurity research at Consumer Reports and one of the report’s authors. “Our research makes the case for a more proactive approach to countering online abuse by handling it more like spam. In other words, companies can empower people to take control of their social media experiences by equipping them with more flexible and customizable tools to combat harassment.”
The report advocates equipping users with a system that automatically detects potentially abusive content and quarantines it in a personalized dashboard. There, users could choose to review the flagged material, ignore it, or delegate its review to a trusted contact. Designed with trauma-informed principles, such a system would reduce mental health risks and give users greater agency over their own online experiences.
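The report describes this workflow at the product level rather than as code. Purely as an illustration, here is a minimal Python sketch of the routing step, in which abuse_score is a toy stand-in for a trained classifier and every name, threshold, and field below is hypothetical:

```python
from dataclasses import dataclass, field

# Toy stand-in for a trained abuse classifier; returns a score in [0, 1].
# A real system would use a machine-learned model, not a keyword list.
PLACEHOLDER_LEXICON = {"idiot", "trash"}

def abuse_score(text: str) -> float:
    words = text.lower().split()
    hits = sum(w in PLACEHOLDER_LEXICON for w in words)
    return min(1.0, 5 * hits / max(len(words), 1))

@dataclass
class QuarantinedItem:
    text: str
    score: float
    reason: str  # shown to the user, so the filtering stays transparent

@dataclass
class UserSettings:
    threshold: float = 0.7       # user-tunable sensitivity
    delegate: str | None = None  # trusted contact who may review instead

@dataclass
class Inbox:
    visible: list[str] = field(default_factory=list)
    quarantine: list[QuarantinedItem] = field(default_factory=list)

def deliver(msg: str, inbox: Inbox, settings: UserSettings) -> None:
    """Show a message, or hold it in quarantine for optional later review."""
    score = abuse_score(msg)
    if score >= settings.threshold:
        reason = f"abuse score {score:.2f} exceeds your threshold {settings.threshold}"
        inbox.quarantine.append(QuarantinedItem(msg, score, reason))
    else:
        inbox.visible.append(msg)

# Example: the abusive message is held back; the benign one is shown.
box, prefs = Inbox(), UserSettings(threshold=0.5)
deliver("you absolute idiot", box, prefs)
deliver("great piece, thank you!", box, prefs)
```

The essential design choice is that flagged material is held rather than deleted: nothing is removed platform-wide, and the person targeted decides if, when, and through whom to look at it.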
“If social media companies are serious about protecting free speech like they claim, they must do more to protect users from abuse, which can intimidate people into self-censorship and silence,” said Viktorya Vilk, director of digital safety and free expression at PEN America and an author of the report. “We’re proposing a system that protects free expression while empowering individuals to set their own boundaries. Rather than suppressing content platform-wide, it allows individual users to decide what they see and when.”
The approach also increases transparency in content moderation, giving users visibility into what is flagged and why—making the process more accountable and customizable.
“The technology to automatically detect and quarantine abusive content, while inherently imperfect, already exists,” notes Deepak Kumar, assistant professor of computer science and engineering at the University of California San Diego and an author of the report. “In fact, social media companies are using it all the time to detect policy violations behind the scenes. In other words, the infrastructure is there. What’s missing is the commitment of social media platforms to prioritize user autonomy and safety.”
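One publicly documented example of such off-the-shelf detection is Jigsaw’s Perspective API, which returns probability-style scores for attributes such as toxicity. A minimal sketch of a request follows; the API key and sample text are placeholders, and the report does not tie its proposal to any particular classifier:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; keys are issued through Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "example message to screen"},  # placeholder text
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity score: {score:.2f}")  # probability-style value in [0, 1]
```

In a user-side quarantine system, a score like this would feed the per-user threshold rather than a platform-wide takedown decision.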
The report features testimony from experts in trust and safety, tech policy, and digital rights, including former industry insiders.
“Even when trust and safety [in platforms] was at its heyday, we still didn’t have the investment and the resources for folks to be able to say, we’re going to take this engineering team off of building this product that we think is going to bring in all of this revenue, so that you can think about abuse,” says Anika Navaroli, former senior policy official at Twitter (now X) and Twitch. “It doesn’t get to the bottom line.”
“Speech that is abusive and harassing towards an individual or a group on social media is abuse and harassment, as far as the targets are concerned,” says Susan McGregor, a scholar at Columbia’s Data Science Institute. “To platforms that host it, it looks like engagement, and engagement equals advertisers.”
The proposed “spam model” offers multiple benefits: reduced exposure to traumatic content, user-customizable controls, and improved data for refining moderation systems. It is not a cure-all, but it represents a critical step toward safer, more inclusive digital spaces that support open discourse.
PEN America and Consumer Reports are calling on tech companies to put their money where their mouths are and build spaces for public discourse online that are safer, more equitable, and more free.
About PEN America
PEN America stands at the intersection of literature and human rights to protect free expression in the United States and worldwide. We champion the freedom to write, recognizing the power of the word to transform the world. Learn more at pen.org.
About Consumer Reports
Founded in 1936, Consumer Reports works to create a fair and just marketplace for all. Known for rigorous testing and research, CR also advocates for consumer rights around safety, digital rights, financial fairness, and sustainability. CR is independent and nonprofit.
Contact: cyrus.rassool@consumer.org