Facebook, YouTube and Twitter have significantly increased the amount of hate speech and racist material they flag and remove from their websites, under pressure from the EU and others to take much greater responsibility for the content hosted on their platforms.
"Instagram has decided to join forces in the fight against illegal online hate speech and will now also apply the code of conduct," EU Justice and Consumer Affairs Commissioner Vera Jourova told reporters.
"And this morning I also received the message that Google+ is joining," she added.
Facebook, where almost half of the illegal content was found, according to the survey, announced last year that it would hire an additional 3,000 moderators to scour the platform for potentially hateful content. It reviewed complaints in less than 24 hours in 89.3 per cent of cases, compared with 62.7 per cent for YouTube and 80.2 per cent for Twitter.
Internet giants are also facing political pressure in the US. This week, representatives from Twitter, Facebook and Google appeared before the US Congress to explain how they are tackling extremist content, including the use of bot accounts run by Isis and al-Qaeda. Twitter said it had deleted 300,000 terrorist accounts in the first six months of 2017, mostly using spam-fighting artificial intelligence tools.