Tech giants band together against terror and extremist content

Safya Khan-Ruf – 27 June 2017

Facebook, Microsoft, Twitter and YouTube have created a joint forum to tackle terrorism and online hate. The companies will share technological solutions and best practices for removing extremist and terrorist content online.

The four firms announced on Monday that they would be stepping up their efforts to challenge terrorism. The Global Internet Forum to Counter Terrorism is the latest effort by tech giants to combat online extremism, after repeated calls from governments and civil society to do more.

“We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online,” the companies said in a statement.


Internet companies have been repeatedly criticised for failing to remove extremist content and violent propaganda from their platforms. The scrutiny faced by tech firms heightened after terrorist groups used the sites to spread hateful messages and recruit.

One complication has been the global nature of the platforms where extremist content is posted. In the US, legislation shields platform owners from legal responsibility for user content unless they are legally required to remove it.

However, hundreds of major brands pulled advertising from YouTube, owned by Google, after revelations that their ads were being placed next to extremist or hateful content. Several countries have also cracked down on hate speech and extremist content by threatening social media giants with fines if they fail to promptly remove offensive posts.

Tech giants have been trying to balance the need for free speech with the removal of unacceptable content. The Global Internet Forum to Counter Terrorism claims it will allow companies to share best practices for “content detection and classification techniques using machine learning” and to “define standard transparency reporting methods for terrorist content removals”.


Internal Facebook documents obtained by The Guardian recently showed the complex rules behind internet companies’ regulation of extremist content. Moderators reportedly have to memorise the names and faces of more than 600 terrorist leaders. The leaked documents also revealed that, in a single month, Facebook identified more than 1,300 posts on the site as “credible terrorist threats”.

Last December, the four tech companies also announced an information-sharing initiative: a database of digital fingerprints, known as ‘hashes’, for extremist videos and images. When one firm flags a piece of content, the others can use its hash to find and remove the same content on their own platforms.
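The mechanics of such a shared database can be sketched in a few lines. This is an illustrative simplification only: the names below are invented, and the real initiative uses perceptual hashes (which still match content after re-encoding or resizing), whereas the plain cryptographic hash used here only matches byte-identical files.

```python
import hashlib

# Hashes contributed by all participating firms (a stand-in for the
# shared industry database; in reality this is a managed service).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ('hash') of a piece of content.

    SHA-256 is used here for simplicity; it only matches exact byte
    copies, unlike the perceptual hashes used in practice.
    """
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """One firm flags content: its hash joins the shared database."""
    shared_hash_db.add(fingerprint(content))

def should_remove(content: bytes) -> bool:
    """Another firm checks an upload against the shared database."""
    return fingerprint(content) in shared_hash_db

# One firm flags a propaganda video; a second firm then detects an
# identical copy uploaded to its own platform.
flag_content(b"<bytes of flagged video>")
print(should_remove(b"<bytes of flagged video>"))  # True
print(should_remove(b"<some other video>"))        # False
```

The key design point is that firms share only the fingerprints, not the underlying images or videos, so flagged material does not have to be redistributed between companies.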

The forum will also work with counter-terrorism experts from government, civil society groups and other backgrounds to share knowledge about terrorism. The internet companies have pledged to help smaller tech firms develop their own strategies and processes for tackling extremist content online.

Finally, the four firms announced that they will host a series of learning workshops in partnership with the Swiss foundation ICT4Peace and the United Nations Counter-Terrorism Committee Executive Directorate.
