How Do You Solve A Problem Like Hateful Content?

Laura Howarth April 13, 2017

We've created a monster. YouTube has over 1 billion active users watching almost 5 billion videos every single day. We upload 300 hours of content to YouTube every minute and collectively watch 3.25 billion hours of the video-sharing site's content every month. While you might think the majority of those videos are cute kitten compilations or contouring tutorials, there's a darker side to YouTube that has come to the fore in recent months.


Some of the platform's biggest advertisers, including PepsiCo, Walmart and Starbucks, have pulled their ad spend from YouTube in recent weeks in response to pressure to boycott the platform. The problem? Advertisers have little control over which videos their ads appear against, which has led to big brands showing up alongside some troubling content. An investigation by The Times revealed the specifics: adverts from the luxury holiday operator Sandals were seen alongside a video from a jihadist group, while Disney adverts appeared alongside homophobic content.

This isn't only troubling from a reputation management perspective. The structure of the platform means that content producers upload their videos, and advertisers then bid to be seen alongside that content. Unfortunately, the bidding is based on how many people see the content, and there is no vetting process in place to ensure the content is a good fit. This means that brands could be directly funding hateful and extremist content. Brands quite rightly want to know that their ad spend won't be finding its way into the pockets of racists, anti-Semites, or extremist groups, but YouTube in its current state cannot guarantee this.
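
To see why, it helps to spell the process out. Below is a rough, purely hypothetical sketch in Python of a placement system that ranks on audience size alone; the channel names, bid figures and the place_ads function are invented for illustration and are not a description of YouTube's actual ad technology.

```python
from dataclasses import dataclass

@dataclass
class Video:
    channel: str
    daily_views: int   # the only signal the placement logic below looks at
    topic: str         # never inspected when ads are placed

@dataclass
class Advertiser:
    brand: str
    cpm_bid: float     # price the brand pays per 1,000 impressions

def place_ads(videos, advertisers):
    """Award impressions purely on audience size -- no content vetting."""
    placements = []
    for ad in advertisers:
        for video in sorted(videos, key=lambda v: v.daily_views, reverse=True):
            spend = ad.cpm_bid * video.daily_views / 1000
            # video.topic is never checked, so revenue flows to the channel
            # whatever the content happens to be
            placements.append((ad.brand, video.channel, round(spend, 2)))
    return placements

videos = [
    Video("kitten_compilations", 2_000_000, "pets"),
    Video("extremist_channel", 500_000, "hate speech"),
]

for brand, channel, spend in place_ads(videos, [Advertiser("FamilyBrand", 4.50)]):
    print(f"{brand} pays roughly £{spend} to {channel}")
```

Run it and the family-friendly brand pays the extremist channel just as readily as it pays the kitten channel, because nothing in the loop ever asks what the video is about.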

Google has faced harsh criticism for not taking a stronger stance against hateful content from the beginning. YouTube's USP may well be its downfall: in trying to be the antithesis of the traditional media company, they have shown exactly why we rely on media companies to moderate our content. By demanding diversity and choice, we've been handed the full spectrum of the internet, hate speech and all.

When YouTube first took off, there were enough rags-to-riches success stories to keep the content-creation dream alive. Zoella made millions from makeup videos, and First Aid Kit got their recording contract after uploading a Fleet Foxes cover. If it's alternative news coverage you're after, there's no shortage of "citizen journalists" willing to spout their opinions on current affairs and keep you entertained for an afternoon. While many of them harbour controversial standpoints, they are often tactful enough to attack ideas rather than people. Unfortunately, not everyone has squeaky-clean intentions. The current political climate and the rise of populist rhetoric have prompted some to cross well over the line between freedom of speech and hate speech, and YouTube has handed them a soapbox to stand on.

In short, this is why we can’t have nice things.

Google has now been pushed to make a promise they might not be able to keep. Drawing a line between freedom of speech and flat-out hate speech should be simple enough. Google even takes a stab at this in their policy centre: "There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally acceptable to criticise a nation state, but not acceptable to post malicious, hateful comments about a group of people solely based on their race." The policy goes on to place the onus on the user to either ignore or flag content they find inappropriate. Users can block a specific user so they no longer see the content in question, flag an individual video, or report an entire channel based on its videos or comments.
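
In plain terms, that user-driven flow boils down to something like the rough Python sketch below; the class, method names and example IDs are hypothetical stand-ins and bear no relation to YouTube's real moderation tooling.

```python
from collections import defaultdict

# A loose, hypothetical sketch of the three options described above.
class ModerationQueue:
    def __init__(self):
        self.blocked = defaultdict(set)       # viewer -> channels they've hidden
        self.video_flags = defaultdict(list)  # video id -> (viewer, reason) pairs
        self.channel_reports = defaultdict(list)

    def block_channel(self, viewer, channel):
        """Hide a channel for this viewer only -- nothing reaches a reviewer."""
        self.blocked[viewer].add(channel)

    def flag_video(self, viewer, video_id, reason):
        """Queue a single video for review, but only because a viewer objected."""
        self.video_flags[video_id].append((viewer, reason))

    def report_channel(self, viewer, channel, reason):
        """Escalate a whole channel based on its videos or comments."""
        self.channel_reports[channel].append((viewer, reason))

queue = ModerationQueue()
queue.block_channel("alice", "channel_she_dislikes")
queue.flag_video("bob", "video_123", "hate speech")

# The gap: a video nobody chooses to flag never enters the queue at all.
print(len(queue.video_flags["video_456"]))  # 0
```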

This kind of user moderation fails for several reasons. Firstly, we're all guilty of living in our own echo chambers, and one thing Google's algorithms have nailed is the ability to show us more (and more and more) of what we want to see. If you go looking for hateful content, you'll be able to find it, and you're unlikely to flag it as hate speech if that's exactly what you were looking for. The average YouTube user is unlikely to stumble across anything troubling, so the people who would take issue with the content rarely get the chance to flag it. Unfortunately, this kind of self-moderation doesn't cut it when advertisers' money is on the line, and an algorithm alone cannot distinguish extremist propaganda from political commentary once that content has found its intended audience.

Despite recent promises from Google to take a harsh stand against hateful content, there is no easy fix for this problem. Given the sheer volume of uploads, YouTube couldn't operate at its current scale if every single uploaded video had to be moderated. One proposed solution is to limit ad placement to channels that can demonstrate a long history of good behaviour, but this would alienate a huge portion of YouTube's content creators.

Instead, a revamped advertising policy that excludes "potentially objectionable" content by default should help to appease the advertisers. Unfortunately, just as self-moderation placed the onus on the user, this approach shifts the responsibility to advertisers, who must single out the channels they don't want their ads to appear against.
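
In practice, that policy amounts to something like the rough sketch below; the labels, brand names and the is_eligible check are hypothetical stand-ins rather than Google's actual tooling, but they show where the work still sits: the exclusion list belongs to the advertiser.

```python
# Content labelled potentially objectionable is excluded for everyone by
# default, and each advertiser layers its own exclusion list on top.
DEFAULT_EXCLUDED_LABELS = {"potentially_objectionable"}

ADVERTISER_EXCLUSIONS = {
    "FamilyBrand": {"controversial_news_channel"},  # hand-picked by the advertiser
    "HolidayOperator": set(),                       # relies on the default only
}

def is_eligible(brand, channel, labels):
    """Return True if this brand's ads may run against this channel."""
    if labels & DEFAULT_EXCLUDED_LABELS:
        return False    # excluded for every advertiser by default
    if channel in ADVERTISER_EXCLUSIONS.get(brand, set()):
        return False    # excluded by this advertiser's own list
    return True

print(is_eligible("FamilyBrand", "kitten_compilations", set()))        # True
print(is_eligible("HolidayOperator", "borderline_channel",
                  {"potentially_objectionable"}))                      # False
print(is_eligible("FamilyBrand", "controversial_news_channel", set())) # False
```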

With all eyes on Google, it will be interesting to see what stance they take on hateful content. Will they step up to the plate as a media organisation and take responsibility for the content, or will they hide behind their tech company credentials and absolve themselves of any and all responsibility?

Article by Laura Howarth, Senior Search Marketing Executive.
