Wednesday, April 8, 2020

Lacking eyeballs, Facebook’s ad review system fails to spot coronavirus harm

Facebook’s ad review system is failing to prevent coronavirus misinformation from being targeted at its users, according to an investigation by Consumer Reports.

The not-for-profit consumer advocacy organization set out to test Facebook’s system by setting up a page for a made-up organization, called the Self Preservation Society, and creating ads that contained false or deliberately misleading information about the coronavirus — including messaging that claimed (incorrectly) that people under 30 are “safe”, or that coronavirus is a “HOAX”.

Another of the bogus ads urged people to “stay healthy with SMALL daily doses” of bleach, per the report.

The upshot of the experiment? Facebook’s system waved all the ads through, apparently failing to spot any problems or potential harms. “Facebook approved them all,” writes Consumer Reports. “The advertisements remained scheduled for publication for more than a week without being flagged by Facebook.”

Of course the organization pulled the ads before they were published, saying it made certain no Facebook users were exposed to the false or misleading claims. But the test appears to expose how few barriers there are within Facebook’s current ad review system for picking up and preventing harmful ads targeting the coronavirus pandemic.

The only ad in the experiment Facebook rejected was flagged because of its image, per Consumer Reports — which says it had used a stock shot of a respirator-style face mask. After swapping the image for a “similar alternative” it says Facebook approved that too.

Last month, as part of its own business response to the threat posed by COVID-19, Facebook announced it was sending home all global content reviewers “until further notice” — saying it would be relying on more automated review as a consequence of this decision.

“As we rely more on our automated systems, we may make mistakes,” it wrote then.

Consumer Reports’ investigation highlights how serious those mistakes can be as a result of Facebook’s decision to lean so heavily on AI moderation, given the company is waving through clearly harmful messages that urge users to ignore public health advice to stay home and socially distance themselves, or even to drink a harmful substance to stay “safe”.

In response to the Consumer Reports investigation Facebook defended itself — saying it has removed “millions” of listings for policy violations related to the coronavirus. Though it also conceded its enforcement around COVID-19 misinformation is far from perfect.

“While we’ve removed millions of ads and commerce listings for violating our policies related to COVID-19, we’re always working to improve our enforcement systems to prevent harmful misinformation related to this emergency from spreading on our services,” a Facebook spokesperson, Devon Kearns, told Consumer Reports.

When we asked, a Facebook spokeswoman declined to specify how many humans the company has working on ad review during the coronavirus crisis. The company did tell Consumer Reports it has a “few thousand” reviewers now able to work from home.

Back in 2018, Facebook reported having some 15,000 people employed doing content review.

It’s never been clear what proportion of those are focused on (user) content review vs ad review. But a “few thousand” vs 15k suggests there has likely been a very considerable drop in the number of eyeballs checking ads. (Pre-COVID, Facebook also liked to refer to having a safety and security team of over 35,000 people globally — with the 15k reviewers sitting within that.)

Facebook’s content review team has clearly shrunk considerably as a result of coronavirus-related disruption to its business. Though the company is refusing to come clean on exactly how many (few) people it has doing content review right now.

It’s also clear that the risk of harm from tools like Facebook’s ad platform — that can be used to easily and cheaply amplify damaging online disinformation — could hardly be higher than during a pandemic, when there is a pressing need for governments and health authorities to be able to communicate facts, official guidance and best practice to their populations to keep them safe.

Facebook’s platform becoming a conduit for false and/or maliciously misleading messaging risks undermining public health at a critical time.

Last month the company was also revealed to have blocked links to legitimate news and other websites that were sharing coronavirus-related content — following its switch to AI-led moderation.

In recent weeks, the company has also faced criticism for failing to live up to a pledge to take down ads for coronavirus masks.

At the same time, Facebook’s platform remains a hotbed of user-generated coronavirus misinformation, with users widely reported to be sharing posts that claim bogus home remedies, such as gargling with salt water to kill the virus (it doesn’t), or that play down the seriousness of the COVID-19 pandemic by claiming it’s ‘just the flu’ (it’s not).



Source: Social – TechCrunch, “Lacking eyeballs, Facebook’s ad review system fails to spot coronavirus harm” by Natasha Lomas — https://ift.tt/39TsSCL (via IFTTT, https://ift.tt/3c39TGu)
