Facebook now asks if every post contains “hate speech”

UPDATE: After only an hour or two, Facebook has pulled the “hate speech” icons. What gives? Did we see a planned program rolled out accidentally? Or was it a trial that quickly went wrong? Who knows—it’s Facebook, Jake.

_______

Is this Facebook’s way of policing its site? Because if it is, we’re all doomed. As of this morning, every single post by every single person, public or privately posted, contains a box at the bottom asking if the post contains “hate speech”. It makes no difference whether the post is innocuous or inflammatory. Here’s my goose post as it appeared publicly on Facebook; check out the orange notification at the bottom:

Now who is going to check “no”? (You can even do that for your own posts.) The abuse will occur when people start checking yes for political or religious posts that aren’t “hate speech” but are critical of ideologies. (I anticipate that my weekly “Jesus and Mo” posts will be flagged for hate speech.)

And, of course, Facebook gives NO definition of “hate speech”.

We don’t even know if this is an experiment that will enable Facebook to determine what is considered “hate speech.” But if they can’t do that already, crowdsourcing the criteria is about the worst way of doing it.

Go home, Facebook: you’re drunk!

47 Comments

  1. Posted May 1, 2018 at 10:23 am | Permalink

    Not in the UK!

    /@

  2. Craw
    Posted May 1, 2018 at 10:23 am | Permalink

    We keep finding ways to give the worst people more power.

  3. Posted May 1, 2018 at 10:23 am | Permalink

    I should say it contains Anatidae Love Speech!

  4. Posted May 1, 2018 at 10:24 am | Permalink

    Just looking on my own facebook, where I have just posted something, and none of the posts there, be they mine or other people’s, have the ‘hate speech’ button on them.

    • Posted May 1, 2018 at 10:28 am | Permalink

      Are you in the U.S.?

      • moleatthecounter
        Posted May 1, 2018 at 10:37 am | Permalink

        Thomas – (and Jerry)

        Same here – I’m in the UK. That button does not appear on your post noted above, or on one I just put up.

  5. andrewilliamson
    Posted May 1, 2018 at 10:24 am | Permalink

    I think the question, added to every post as it is, displays clear contempt towards all Facebook users.

    So yes, every post now contains hate speech.

  6. bbenzon
    Posted May 1, 2018 at 10:27 am | Permalink

    Just noticed it myself. This is an open invitation for mischief.

    • bbenzon
      Posted May 1, 2018 at 11:22 am | Permalink

      And now, as Jerry notes in a headnote, the question seems to have disappeared from the site. I must have checked a dozen pages, and it’s not on any of them.

  7. TJR
    Posted May 1, 2018 at 10:31 am | Permalink

    Dear Mischief-Makers,

    here is an open goal for you to stick as many balls in as you like.

    Yours, Facebook

  8. ThyroidPlanet
    Posted May 1, 2018 at 10:33 am | Permalink

    The depths of peril that free speech faces.

    It occurred to me recently- context. It matters.

    For example, consider the simple case: a Fac3b004 page that outlines the definition of hate speech would itself be flagged for hate speech.

    But wait there’s more.

    Comic books – hate speech.

    Your favorite movie -hate speech.

    Han shooting first – hate.

    Video games – hate.

    Where will it end?

  9. Posted May 1, 2018 at 10:41 am | Permalink

    Seems like there are really two main issues with this new Facebook “feature”.

    First, as we all know here, hate speech shouldn’t really be a thing, at least not without defining it narrowly using categories such as “evidence of a crime” or “snuff film”.

    Second, there’s the issue of the mechanism behind the Hate Speech feature. FB can’t really explain their algorithm in detail without risking immediate gaming by hackers. If we’re being generous, we might imagine that marking a post as Hate Speech gets it reviewed by a human. They also would accumulate statistics on users who abuse the feature by incorrectly marking posts. The stats could be used first to advise the user on their “hate speech” criteria, but continued abuse gets the user’s input ignored. If abuse still continues, the user is bumped from Facebook entirely.

    If they refined and declared the rules for such posts, and the mechanism worked as above, it might even work. These mechanisms often have unintended consequences, though, so it would have to be monitored closely and fine-tuned.
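The escalating mechanism this commenter imagines (flag goes to human review; users whose flags are repeatedly overturned get advised, then ignored, then removed) can be sketched in a few lines. This is purely an illustration of the commenter's proposal, not anything Facebook has described; every name and threshold here is invented for the example.

```python
# Sketch of the escalating-response scheme proposed in the comment above.
# All class names and thresholds are hypothetical, not Facebook's system.

from dataclasses import dataclass

WARN_AFTER = 3     # overturned flags before the user is advised
IGNORE_AFTER = 6   # overturned flags before their input is discarded
BAN_AFTER = 10     # overturned flags before the account is dropped


@dataclass
class Flagger:
    bad_flags: int = 0
    status: str = "trusted"   # trusted -> warned -> ignored -> banned

    def record_review(self, flag_was_correct: bool) -> None:
        """Update the user's standing after a human reviews one of their flags."""
        if not flag_was_correct:
            self.bad_flags += 1
        if self.bad_flags >= BAN_AFTER:
            self.status = "banned"
        elif self.bad_flags >= IGNORE_AFTER:
            self.status = "ignored"
        elif self.bad_flags >= WARN_AFTER:
            self.status = "warned"


def accept_flag(user: Flagger) -> bool:
    """Only trusted or merely-warned users can still queue posts for review."""
    return user.status in ("trusted", "warned")
```

Under this scheme a mischief-maker who flags baby pictures as hate speech would lose influence quickly, which addresses the abuse scenario raised earlier in the thread.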

  10. Vicky Sharron
    Posted May 1, 2018 at 10:48 am | Permalink

    also happened to me. got screen shots. but it is gone now, the question box about the hate speech. and it did not happen to any of my other friends so far on facebook. i did a google search and found your blog. do you know of others it happened to as well ?

  11. Saga
    Posted May 1, 2018 at 10:50 am | Permalink

    I think only some heavier users are in the trial. But the moment I posted about it, the system seems to have switched it off. This makes it all the more creepy..!

  12. Another Tom
    Posted May 1, 2018 at 10:53 am | Permalink

    No, the abuse is going to start with people clicking yes because they think doing so is funny.

    Baby pictures, cat videos, and someone advocating for the extermination of X people are all going to be flagged as hate speech.

    I await Readers’ wildlife photos being tagged as hate speech.

  13. Posted May 1, 2018 at 10:57 am | Permalink

    I live in the US and haven’t seen that feature. I checked your timeline and saw the duck post posted twice, with only one of them having the hate speech question at the bottom. None of your other posts had that feature.

  14. Dean Reimer
    Posted May 1, 2018 at 10:58 am | Permalink

    I don’t see it in Canada either, so it’s possible this is a test rollout of a new feature to the US, or possibly to a subset of users. (Which is interesting, in that the US is one of the few countries without hate speech laws!)

    I don’t see this being a permanent feature. I suspect they are using this to train a machine learning algorithm to recognize “hate speech.” I would imagine that posts marked “Yes” will be reviewed manually before inclusion in a teaching set.

    Regardless of how this is implemented, I see more negatives than positives.

  15. Vicky Sharron
    Posted May 1, 2018 at 10:58 am | Permalink

    I saw it, screen shotted (is that even a word ? ) and posted it into my post about wtf was facebook doing asking me if my posts had hate speech, THEN when i went to my wall it was there and I did a screen shot on that and shared them both in my orig post down in the comments, then when I went back after finding your page, before it loaded, it was gone. YES THAT IS CREEPY AS HECK !!!

    • garman
      Posted May 1, 2018 at 12:07 pm | Permalink

      Maybe “screen shat”?

      • Vicky Sharron
        Posted May 1, 2018 at 3:24 pm | Permalink

        lmao…. or screen splashage (stolen from a funny video )

  16. Vicky Sharron
    Posted May 1, 2018 at 11:04 am | Permalink

    its hitting the news now, up to three so far, google list

  17. GBJames
    Posted May 1, 2018 at 11:05 am | Permalink

    I haven’t seen this. I suppose they don’t trust my judgement.

  18. Posted May 1, 2018 at 11:11 am | Permalink

    Haters gonna hate.

  19. BJ
    Posted May 1, 2018 at 12:12 pm | Permalink

    I was going to build off your Chinatown joke, but I’ll just wait for Ken to make what will surely be a better one than mine.

    Anyway, the ever-increasing authoritarianism of large internet forums — from Facebook, to Reddit, to Twitter, to every gaming and book and other hobby forum — should be terrifying. The First Amendment was enshrined in the Constitution because the Founding Fathers foresaw the desire of powerful government forces to censor ideas in the public space. Unfortunately, people back then could not have foreseen a force like the Internet becoming the public square and, since the First Amendment only applies to government action (as free speech opponents so often like to note), the true public forum will continue to become a place where only “acceptable” views are allowed to be expressed, and that acceptability will be decided by certain authoritarian people with a decidedly specific ideological perspective (read: left-to-regressive left).

  20. Michael Fisher
    Posted May 1, 2018 at 12:35 pm | Permalink

    According to some fb spokesperson it’s a recently developed, incomplete facebook internal feature not meant for users – rather it’s for fb staff to categorise posts

    A bug made it appear to users for a short time.

    • Posted May 1, 2018 at 1:30 pm | Permalink

      I think it is none of their business to categorise posts this way!

      • Michael Fisher
        Posted May 1, 2018 at 1:56 pm | Permalink

        I agree. I suppose they’re doing so to keep their business alive for longer, i.e. pandering to various governments’ requests for data on “subversive” or “dissident” or “criminal” users & the internal activities of supposedly private ‘closed’ or ‘secret’ fb groups. [Not just fb are weighing up how much to bend to governments versus user privacy.]

        • BJ
          Posted May 1, 2018 at 1:59 pm | Permalink

          They were already doing this well before the current scandals, with their censoring of posts considered insensitive, offensive, etc., and having employees censor conservative media: https://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006

          (note that Gizmodo is a site that leans heavily left and is known to post regressive left criticisms quite often, so that report showing up on their site says something)

          • Michael Fisher
            Posted May 1, 2018 at 2:19 pm | Permalink

            Well yes. But supplying user data to governments is my main point.

            And doing next to nothing to stop Russia, China, US agencies from scraping data on industrial scales.

            And I suppose they’ll bend to the new Chinese ‘cybersecurity’ law that requires companies to store users’ data inside the country, in data centres operated in ‘partnership’ with local data management companies [Apple, IBM, Microsoft & Amazon have already agreed to this]. A Chinese dissident, or Taiwanese, would be ill advised to use an Apple phone despite the encryption!

            • Michael Fisher
              Posted May 1, 2018 at 2:23 pm | Permalink

              This local hosting of data centres is also happening in the EU – possibly worrying. Depends.

            • BJ
              Posted May 1, 2018 at 4:37 pm | Permalink

              I was responding to your saying that they were doing this to keep their business alive for longer. Just noting that this isn’t some knee-jerk reaction to bad publicity, but something they’ve been doing for years, which further suggests that it may be systemic to their corporate culture and/or willingness to bend to the demands of a certain part of the political spectrum.

  21. JonLynnHarvey
    Posted May 1, 2018 at 2:24 pm | Permalink

    Without a clear definition of hate speech, this is even more insidious. SOME (but not all) countries with anti-hate-speech laws have very clear and specific definitions for it, but if it is up to individual whim, then we are at the mercy of the arbitrarily offended.

  22. infiniteimprobabilit
    Posted May 1, 2018 at 6:31 pm | Permalink

    Corporate arse-covering.

    cr

  23. Robert Covey
    Posted May 1, 2018 at 6:32 pm | Permalink

    FWIW, until recently I designed software for K-12 education environments — environments where keeping students from accessing “inappropriate content” at school and/or on school-provided equipment is generally considered acceptable censorship.

    We used a technique similar to what FB was doing this morning, in order to identify (or at least attempt to identify) appropriate vs. inappropriate content for students using our (much smaller scale) social media platform similar to FB.

    Our software didn’t simply tally the votes of the respondents (that’s not AI); we also attempted to identify which respondents were giving intentionally contrary identifications. As an example, we would intentionally plant known inappropriate content (in our K-12 environment it was very mild inappropriate content due to our audience, things like “Sports Illustrated Swimwear Issue is the Skimpiest Ever” or “KKK Meeting Draws Record Attendance”).

    Based on that, we used the user input to train our AI platform to answer not only our primary question, 1) is this content appropriate or inappropriate?, but also a second one: 2) is this user a reliable identifier?

    The upshot is that our AI software also weighted responses by user reliability.

    Facebook is a far larger force in contemporary society by several orders of magnitude than the social media platform we designed for schools. And given our audience we could err on the side of censorship, and even then we would have to have humans correct our AI’s false positives after we received complaints.

    But the point that I wanted to make is that the Facebook designers I know who use AI are not just tallying the votes. That’s not AI. Please don’t consider them naive. Most of these people are pretty sophisticated and have already anticipated the sort of criticisms I see here.
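The two-step approach this commenter describes (score each voter's accuracy on planted items with known labels, then weight their votes on unknown items by that score) can be sketched briefly. This is an illustration of the commenter's description, not their actual software; the function names and the 0.5 decision threshold are assumptions.

```python
# Reliability-weighted crowd voting, per the K-12 moderation comment above:
# voters are calibrated against planted content with known labels, then
# their votes on real content count in proportion to that calibration.
# All names and thresholds here are illustrative.

def voter_reliability(votes_on_plants: dict[str, bool],
                      planted_labels: dict[str, bool]) -> float:
    """Fraction of planted items this voter labelled correctly."""
    scored = [votes_on_plants[item] == label
              for item, label in planted_labels.items()
              if item in votes_on_plants]
    # A voter who saw no plants gets a neutral 0.5 weight.
    return sum(scored) / len(scored) if scored else 0.5


def weighted_verdict(votes: dict[str, bool],
                     reliability: dict[str, float]) -> bool:
    """True ("inappropriate") if the reliability-weighted vote mass says so."""
    total = sum(reliability[voter] for voter in votes)
    yes = sum(reliability[voter]
              for voter, says_yes in votes.items() if says_yes)
    return total > 0 and yes / total > 0.5
```

The design choice is that a consistently contrarian voter ends up with a weight near zero, so malicious "yes" clicks on baby pictures stop mattering without any explicit ban.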

  24. SusanD
    Posted May 1, 2018 at 9:34 pm | Permalink

    Oooohh! This whole thing brings to mind ‘1984’ and the Ministry of Truth with its Newspeak, doublethink and thoughtcrime, with armies of people constantly adjusting history and weeding out ‘unpersons’. As Facebook spreads its tentacles further through the ether it eventually joins with Google and takes over the world, sending all free-thinkers to Room 101.

  25. Mike
    Posted May 2, 2018 at 7:45 am | Permalink

    Hate speech? What’s next, book burning?

  26. Posted May 2, 2018 at 9:10 am | Permalink

    It sounds like they gathered the evidence they needed to prove the obvious: There’s no difference that anyone cares about between a post containing hate speech and a post by someone who annoys you.

    I suspect that they knew all along how the addition would go, and planned in advance to gather enough data for a statistically significant sample and then remove it, then analyze the result and keep that in reserve as evidence that it’s impractical to expect them to control how people communicate with each other.

  27. Posted May 2, 2018 at 9:37 am | Permalink

    Are you sure this is happening to everyone? You may have been reported as a hate monger by some zealous Christian.


    • Michael Fisher
      Posted May 2, 2018 at 9:42 am | Permalink

      It was a short-lived bug – read the comments & the post update at top of the post.

  28. Posted May 2, 2018 at 11:22 am | Permalink

    Bug?

    Not likely. These days a lot of the big platforms (Google, for one, I know does it) do a lot of so-called “A/B” testing, where they change a feature for a (statistically relevant) sample of users and get quick feedback before rolling it out generally (or removing it).
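A common way such rollouts pick their sample (a general sketch, not anything Facebook has confirmed about this incident) is deterministic bucketing: hash each user id together with an experiment name, so assignment looks random across users but stays stable for any one user across sessions. The function name and bucket scheme below are illustrative.

```python
# Hash-based A/B bucketing: a generic sketch of how platforms assign a
# stable percentage of users to an experiment. Names are hypothetical.

import hashlib


def in_experiment(user_id: str, experiment: str, percent: float) -> bool:
    """Deterministically place `percent`% of users into the experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000   # stable bucket in 0..9999
    return bucket < percent * 100           # e.g. 5.0% -> buckets 0..499
```

Because the hash includes the experiment name, the same user lands in different buckets for different experiments, which is why one person might see the “hate speech” box while their friends never do.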

    • Michael Fisher
      Posted May 2, 2018 at 11:37 am | Permalink

      It was not meant to go public
      It rolled out on everyone’s primary news feed
      Ars Technica say it was up for only a half hour & was gone by noon ET on Tuesday
      The associated pop up UI was incomplete

      So it was a glitch

