NEW YORK • Facebook's enormous audience has long been catnip to advertisers. But the company's vast ecosystem has come under scrutiny this year from major brands, which are increasingly sensitive to the possibility of inadvertently showing up next to objectionable content.
In response to those concerns, Facebook released a new set of rules on Wednesday that outline the types of videos and articles that it will bar from running ads. It also said it would begin disclosing new information to advertisers about where their messages appear on the platform and on the external apps and sites it partners with.
The rules, which will be enforced by a mix of automation and human review, bar ads from running alongside content that depicts, among other topics, real-world tragedies, "debatable social issues", misappropriation of children's show characters, violence, nudity, gore, drug use and derogatory language. Facebook is applying the guidelines immediately to videos - which the company hopes will become an increasingly lucrative part of its business - and, in the coming months, to articles.
Facebook said users who repeatedly violate its content guidelines, share sensational clickbait or post fake news may lose the ability to run ads.
"There have been concerns that marketers have had that are wide-ranging around digital, and we want to do everything we can to ensure that we are providing the safest environment for publishers, advertisers and for people that utilise the platform," said Ms Carolyn Everson, Facebook's vice-president of global marketing solutions.
The new policies, which closely mimic guidelines established by Google's YouTube, come as advertisers demand more accountability from the Internet giants related to where and how their messages are delivered.
Facebook and Google were criticised during and after the US presidential election for allowing misinformation to spread on their platforms. This year, YouTube had to address advertisers' concerns after messages from major brands like AT&T were discovered on videos that promoted terrorism and hate speech.
The companies are moving quickly to address such issues, particularly as they seek to attract a greater portion of the money earmarked for television advertising to the video content on their sites.
Facebook has enabled hundreds of publishers and individuals to run ads during live video broadcasts in the past year, and the company recently introduced a slate of new shows on a part of its site called "Watch". If the new guidelines encourage people to post more G-rated video content, they are likely to bolster Facebook's pitch to advertisers.
"Facebook is this huge, huge, huge platform, and they haven't really been monetising original content in the same way as YouTube has," said Mr John Montgomery, executive vice-president for brand safety at GroupM, a media investment group for the advertising giant WPP. "What I think is different for Facebook is that this is a much earlier stage for them that they're going into this, and the scale is different in that there will be much, much less content uploaded than those stupefying numbers you hear about on YouTube."
YouTube has said 400 hours of video are added to the site every minute.
That should be an advantage in policing content, Mr Montgomery said, especially with the limits that Facebook is placing on who can make money from certain features.
NEW YORK TIMES