Meta plans to automate many of its product risk assessments.


An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps such as Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.

NPR notes that a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, assessing the risks of any potential updates. Until now, those reviews have been carried out largely by human evaluators.

Under the new system, product teams reportedly fill out a questionnaire about their work, then usually receive an "instant decision" from the AI, which identifies risks and lists the requirements an update or feature must meet before launch.

This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR that it also creates "higher risks."

In a statement, Meta appeared to confirm that it is changing its review system, but insisted that only "low-risk decisions" will be automated, with "human expertise" still used to examine "new and complex problems."
