On January 7, 2025, Meta announced sweeping changes to its content moderation policies, including the end of third-party fact-checking in the U.S., and rollbacks to its hate speech policy globally that remove protections for women, people of color, trans people, and more. In the absence of data from Meta, we decided to go straight to users to assess if and how harmful content is manifesting on Meta platforms in the wake of January rollbacks.
The headline confuses me. Has Facebook ever done something else?
Historically ('10s era) they were very focused on being “family friendly” and censoring anything they considered violent, sexual, perverse, deliberately deceptive, or otherwise upsetting to the middle-aged, middle-income housewife/grandma demographic.
Post-COVID, Zuckerberg’s been increasingly blackpilled, favoring shock-jock engagement bait over any kind of civil moderation, fact-checking of misinformation, or discouragement of fraud/scam posts. So a site plenty of users historically complained felt cloistered and sterile in the Disney-fied sense is now a total madhouse of AI slop, bum-fight clips, drop-shipper spam, and bargain-basement rumor-mongering.
We’ve gone from a space that’s Mormon-style conservative to Fight Club-coded conservative.
Ok, but there had been plenty of criticism before that, specifically about how readily the algorithm funnels users into alt-right echo chambers.
And now it’s much worse.
That was more YouTube than Facebook.
The accusations are leveled at all social media.
Different applications employ different strategies for sifting content.
Facebook (and Google) pivoting to pure engagement bait was a more recent development, closer to 2018 onward, and it synced up with their AI obsessions.