Meta is expanding tests of facial recognition to combat celebrity scam ads and improve its anti-scam measures. Monika Bickert, Meta’s VP of content policy, announced the news Monday, stating that the tests aim to make it harder for fraudsters to trick Facebook and Instagram users.
Scammers often use public figures’ images to bait people into engaging with ads that lead to scam websites, a tactic known as “celeb-bait.” This violates Meta’s policies and harms users. In the tests, facial recognition acts as a backstop: ads flagged as suspicious are checked when they contain images of public figures at risk of celeb-bait.
Meta says the feature is used solely to fight scam ads and that any facial data generated from ads is deleted after the comparison. Early tests have shown promising results in detecting and enforcing against these scams.
Meta is also considering using facial recognition to detect deepfake scam ads. The company has long faced criticism for failing to stop scammers from using famous people’s faces to promote scams. The timing is also notable: the renewed use of facial recognition comes as Meta seeks to collect more user data to train its AI models.

In the coming weeks, Meta will notify public figures who have been enrolled in the celeb-bait program and allow them to opt out. The company is also testing facial recognition to spot celebrity impostor accounts, and is trialing video selfies to help users regain access to their accounts more quickly after a scam or lockout.
The video selfie method requires users to upload a short video of themselves, which is then processed with facial recognition technology. Meta says the approach is comparable in security to the face-based unlocking common on smartphones, and that the selfie will not be visible to others.
The facial recognition tests are running globally, except in the U.K. and the EU, where strict data protection regulations apply. If enough users opt in, the tests could feed into Meta’s broader digital identity offerings.
Meta is engaging with regulators and policymakers in the U.K. as it continues testing. And while users may accept facial recognition deployed purely for security, repurposing that data to train commercial AI models raises privacy concerns.