Google is planning to add a new layer of vetting to a premium YouTube advertising package, in an effort to address growing brand safety concerns over ads appearing alongside offensive content.
Bloomberg Technology reports that Google plans to use human moderators and artificial intelligence software to review and flag videos that are part of Google Preferred, a group of YouTube channels that Google sells to advertisers at higher prices.
Last month, Google said it would hire 10,000 employees to monitor videos, and some of those hires will reportedly be tapped for the latest initiative around premium video.
YouTube is currently dealing with the fallout after Logan Paul, one of its most popular content creators, posted a video featuring the body of an apparent suicide victim he found in Japan.
The video was widely condemned as tasteless, and Google said it had removed Paul’s videos from Google Preferred, per Bloomberg.
The outrage caps a tumultuous year for the leading video platform, which saw two major advertiser boycotts over brand safety.
Speaking to Bloomberg, a spokesperson for Alphabet said: “We built Google Preferred to help our customers easily reach YouTube’s most passionate audiences and we’ve seen strong traction in the last year with a record number of brands. As we said recently, we are discussing and seeking feedback from our brand partners on ways to offer them even more assurances for what they buy in the Upfronts.”