You don’t have to go far to find stories about the problem of hate speech on social media. The latest one I saw comes from a KFOR TV report about a truly vile fat-shaming page on Facebook. Despite people reporting this page, Facebook hasn’t removed it. According to the KFOR TV report, the only response has been an automated reply:
We reviewed the Page you reported for containing hate speech or symbols and found it doesn’t violate our Community Standards.
Yeah. Not helping there, Facebook. The report goes on to quote Facebook user Heidi Davis, who says, “the only way we will be able to get rid of hate mongering is to get Facebook to define their policies further.”
I’m not sure that will help very much unless it’s also coupled with making sure all pages and posts reported for hate speech are evaluated by a human. I wonder if that’s even possible considering the volume of content created daily on Facebook.
On the other hand, that volume might mean Facebook can develop machine learning algorithms that reliably identify hateful text and images. I’m sure Facebook already does this to an extent, but beyond a certain point it’s questionable how reliable such an algorithm can be.
After all, how do you distinguish between an image intended as hate speech and a fun vacation pic of a group of women at the beach? Moreover, how do you distinguish someone merely talking about hate speech from someone being hateful? What are your decision criteria?
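To make that mention-versus-use problem concrete, here’s a toy sketch — everything in it (the word list, the example posts) is invented for illustration and has nothing to do with Facebook’s actual systems. A naive keyword filter ends up flagging both a genuinely hateful post and a post criticizing that same language, because it only sees words, not intent:

```python
# Toy illustration of why naive automated moderation misfires: a keyword
# filter cannot tell *using* a slur from *mentioning* it. The blocklist
# and posts below are hypothetical placeholders.

BLOCKLIST = {"disgusting", "subhuman"}  # hypothetical flagged terms

def looks_hateful(post: str) -> bool:
    """Flag a post if it contains any blocklisted term."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

attack   = "You people are disgusting."             # actual hate speech
critique = "Calling anyone 'disgusting' is wrong."  # talking ABOUT hate speech

print(looks_hateful(attack))    # True  -- correctly flagged
print(looks_hateful(critique))  # True  -- false positive: same word, opposite intent
```

Real classifiers are far more sophisticated than this, of course, but the underlying difficulty — that the same words carry opposite intent in different contexts — is exactly why purely technological moderation stays unreliable at the margins.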
There’s also the bigger question of whether massive, centralized social networking sites are inherently a bad thing precisely because there can be no effective community moderation without either steamrolling individual expression or the place turning into a different sort of MOOC: a Massive Online Open Cage-fight.
To be clear, for the most part I think Facebook is self-moderating, as most users engage with a relatively small, personally selected group of people who enforce their own community norms. For example, if I posted something hateful, I’m sure I’d get called on it very fast, and I would definitely run the risk of being ostracized (i.e. unfriended) if I persisted in my bad behavior. And when bad behavior is the norm for a group of friends, that’s OK too, as it’s usually contained to that group.
It seems to me that the bigger risk within one’s personal online social network is bullying. Hate speech only really becomes a problem when it’s given the bigger megaphone of public-facing Pages and Groups. Sadly, that’s also when it’s hardest to police.
So to answer the question asked in the title of this post… Honestly, I don’t know. I don’t think there’s a technological solution, because ultimately this is a human problem. At the same time it’s nigh impossible to set social norms on a platform whose users are as numerous and diverse as Facebook. In the end the solution has got to be with each and every one of us.
Image credit: Duane Bryers. Detail of “Hilda” pin-up calendar image.