This was originally from the Ctrl Alt-Right Delete newsletter. If you’re not currently subscribed to Ctrl Alt-Right Delete but you’d like to be, you can sign up to receive it by clicking here.
This week Facebook and YouTube both confirmed that politicians using their platforms won’t be held to the same community guidelines as the vast majority of their users. This echoes a similar policy from Twitter, which won’t remove content from world leaders, though Twitter has made one notable exception. But if I’m reading this correctly, Facebook and YouTube’s policies are broader than Twitter’s, and I have a lot of questions. Who qualifies as a politician? Do you have to be an elected official, or does this apply to candidates as well? BuzzFeed actually asked Facebook to define ‘politician’ for their purposes, and Facebook declined.
How tech platforms define ‘politician’ is important. On Twitter, New York Times reporter Kevin Roose made a joke about “100,000 shadowbanned edgelords filing papers to run for school board,” but he’s on the mark, especially as more far-right activists are running for office. In the UK, Stephen Yaxley-Lennon, AKA Tommy Robinson, who ran for EU Parliament, blames being deplatformed for his election loss, and it’s conceivable that he ran thinking the social media companies would restore his accounts. Here in the U.S., Laura Loomer, also deplatformed from most social media, is running for Congress. If the platforms’ definition of ‘politician’ includes candidates, you will likely see deplatformed figures attempt this tactic as a way to build up their social media audiences.
I continue to find the argument of newsworthiness annoying. Yes, content that politicians post on social media is often newsworthy. But much of the content in question makes news precisely because it should be a content violation. Reporters and activists take screenshots and stories get written. Removing content that violates a tech platform’s policies wouldn’t affect whether media outlets cover it. But it would curb the spread, which is particularly important when the content is hate speech or incites violence. Facebook claims they’ll make an exception “where speech endangers people,” but as we already know, the tech platforms’ enforcement of their own policies on hate speech and endangerment is inconsistent as a general rule.
At this point, I should disclose that more than a year ago I advised two tech platforms on this and similar policies, specific to hate speech and violent threats. My advice was that constituents should know if their elected representatives are posting hate speech, but that the platforms also can’t allow elected officials to exploit their reach to amplify hateful and extremist content. I could understand the argument about newsworthiness for a one-time violation, but repeat offenders should absolutely have their content removed. I still stand by this advice, largely because the politicians who make these posts tend to be repeat offenders.
Big tech continues to empower the already powerful over the vast majority of their users. By choosing the powerful over the rest of us, the tech platforms empower bullying. Everyone should have a right to a voice online, but equally important, they should have the right to be safe online. Instead of giving ‘politicians’ cover, the platforms need to recenter their policies around the rest of us. Newsworthiness isn’t a legitimate excuse for the continued enabling of harassment and extremism.