Twitter follows in second place, with LinkedIn a close third, Snap in fourth and Facebook in fifth. Pinterest (sixth), Twitch (seventh), Reddit (eighth) and TikTok (ninth) were listed as “below average”.
The report analysed the platforms against 10 principles: promoting respect; protecting people; diversity; data collection and use; children's well-being; blocking hate speech; tackling misinformation; policy enforcement; advertising transparency; and accountability.
Each platform completed a 250-question survey covering these areas.
The report also identified an "urgent need" for third-party verification to protect brands from having their ads appear next to content considered harmful. "The industry needs to promote and use third-party verification partners more widely, so we are not at the mercy of the platforms' lack of controls," it noted.
Other findings included that misinformation remains a challenge for most platforms, giving advertisers an opportunity to apply pressure on them to improve.
“While certain platforms work with many organisations to combat misinformation, others work with none at all,” the report said. “Some platforms cited their unique engagement models as reason to deprioritise fact-checking, but our desktop research shows that even minor instances can lead to unsafe ad placement for advertisers.”
Elijah Harris, global head of social at IPG Mediabrands agency Reprise, which carried out the research, added: “What this audit shows is that there is work to be done across all platforms from a media responsibility perspective and that the different platforms each need to earn their place on a brand’s marketing plan.
“The audit is a tool to hold platforms accountable for improving their media responsibility policies and enforcement, and to ensure we can track progress over time.”