By Scott Tibbs, February 19, 2018
I sent this open letter to Facebook a week ago.
What is going on at Facebook? I shared Rick Santorum's Google Plus page on my personal Facebook profile on February 10, 2012. When I looked at my "On This Day" page, I was shocked to see a notification that Facebook had removed my post because it "looks like spam" and "violates community standards." My post that violates some phantom "rule" was removed six years after the fact!
Yes, sharing a presidential candidate's profile on another social media service apparently qualifies as "spam" and violates community standards. Yet nowhere in Facebook's community standards is it forbidden to share a presidential candidate's profile on another social media service.
This leads to an obvious question. How many of my other posts on Facebook have been taken down because they violate some ex post facto "rule" that Facebook came up with years after they were posted? How many others have had their posts removed because of this same kind of overly aggressive moderation?
I understand that Facebook is trying to deal with the problem of "fake news" and spam links, but this approach is far too aggressive and needs to be scaled back. It seems likely that my post was flagged by a bot instead of by a human being. Bots can be programmed well, but no bot will ever have the judgment of a human being. No human being would have looked at my post and honestly thought it was spam or violated community standards - unless that person was a Leftist activist who hates Rick Santorum.
Here is the irony of your aggressive spam moderation. I have had comments auto-rejected because they "look like spam," but those comments have been links to Snopes.com refuting hoaxes I see on my timeline. Facebook is actually making it more difficult to combat the spread of "fake news" with this new, overly aggressive approach to content moderation! You are preventing your users from exposing and refuting fake news. Is this what you really want? Are you against your users self-policing content?
Facebook is a private company, and therefore you can implement whatever rules and guidelines you wish. However, content moderation needs to be handled by human beings with a clear understanding of Facebook's rules, not bots that auto-reject any URL, even one pointing to a reliable source.
You have already seen a significant drop in use of your service, which is bad for business no matter how you try to spin it. You will only drive more users away to competing services, especially if they cannot trust that posts completely within community guidelines will be left alone. Just because you are the 500-pound gorilla now does not mean that will always be the case.
Just ask MySpace.