
Meta reportedly plans to shift the task of assessing its products’ potential harms away from human reviewers.
According to a report by NPR, Meta is aiming to have up to 90% of risk assessments fall on artificial intelligence (AI) and is considering using AI reviews even in areas such as youth risk and "integrity," which covers violent content, misinformation and more.
What to expect from Meta?
Updates and new features for Meta's platforms, including Instagram and WhatsApp, have long been subject to human review before reaching the public, but Meta has reportedly doubled down on the use of AI over the last two months.
This AI-centric approach would let Meta update its products more rapidly, but one former executive stated that it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
Speaking to NPR, Meta said it would still tap "human expertise" to evaluate "novel and complex issues," and leave the "low-risk decisions" to AI.
“As risks evolve and our program matures, we enhance our processes to identify risks better, streamline decision-making, and improve people’s experience,” the spokesperson said.
“We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues,” he added.
The news comes a few days after Meta released its latest quarterly integrity reports — the first since it changed its policies on content moderation and fact-checking.