Consumer Reports (CR) welcomes today’s joint hearing to examine the role that social media plays in promoting extremism and misinformation online. The current law governing online platforms fails to provide sufficient incentives for platforms to reduce misinformation and prevent other abuses, such as artificial amplification; indeed, it even shields platforms when their own algorithms promote harmful, misleading, or inflammatory extremist content.

The largest social media platforms are built to incentivize and reward highly engaging content, despite the harms such content can cause, because their business models rely on, and optimize for, engagement. Engagement drives up both the amount of time users spend on a platform, where they can be shown advertisements, and the amount of data the platform can collect to target those ads more precisely. Yet high engagement metrics have come at the cost of accelerating the spread of harmful, misleading, and radicalizing content in the information ecosystem, where, lacking sufficient circuit-breaking context and curation, such content can reinforce itself in ways that keep engagement high regardless of veracity.

Platform-facilitated misinformation has contributed to the rapid proliferation of dangerous conspiracy theories, fueling not only anti-vaccination and anti-mask sentiment but also the violence at the Capitol on January 6. It poses a significant risk to consumer health and to the public sphere. Social media platforms must be sufficiently incentivized to mitigate the harms that their businesses currently enable, amplify, and profit from. They must also be held accountable for the product design and business decisions that have enabled and incentivized harm to the information ecosystem at this scale.