Google’s director of global media strategy and planning, David Benson, has apologised to advertisers for “not brand-safe” content that has slipped through on its platforms.
Speaking at the Future Advertising Forum in London this morning, Benson prefaced his presentation on how advertisers can optimise their TV and digital mix with "an apology and a thank-you" to Google's advertisers.
“It’s not acceptable that bad people can turn our platforms to their own uses and we are dealing with that,” he said. “It is difficult, but we are committed to it.”
Benson shared the statistic that 87% of Google content take-downs are now actioned before a flag is raised, thanks to its "extensive and massive AI capabilities". He said this is how Google plans to "track and take down these problems before they occur".
The comments came in the same week that YouTube CEO Susan Wojcicki said that YouTube would grow its ‘trust and safety teams’ to 10,000 people in 2018 to clamp down on content that violates its policies.
YouTube has been working hard this year to purge its site of violent and extremist videos and is now turning its attention to what Wojcicki described as "other problematic content", such as inappropriate videos featuring or aimed at children.
A recent investigation by The Times newspaper flagged up videos, with ads served against them, that it claimed “exploit young children and appeal to paedophiles”.
“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualised decisions on content,” said Wojcicki in an open letter published this week.
“Since June, our trust and safety teams have manually reviewed nearly two million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future.”