Facebook’s parent Meta uses AI to fight new types of harmful content
Meta, formerly known as Facebook, said on Wednesday it had created artificial intelligence technology that can adapt more quickly to new types of harmful content, including posts discouraging COVID-19 vaccinations.
Typically, AI systems learn new tasks from examples, but collecting and labeling the massive amount of data they require can take months. Meta's new technology, which it calls Few-Shot Learner, needs only a small amount of training data, so it can adapt to combat new types of harmful content within a few weeks instead of a few months.
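The core idea can be illustrated with a minimal sketch: classify a new post using only a handful of labeled examples (a "support set"), here via nearest-centroid matching over simple bag-of-words vectors. Meta's actual Few-Shot Learner is built on large pretrained language models; the corpus, labels, and vectorizer below are illustrative assumptions, not Meta's implementation.

```python
# Few-shot classification sketch: a handful of labeled examples stands in
# for the months of data collection a conventional system would need.
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector as a word-count dictionary."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def centroid(vectors):
    """Average of a list of count vectors."""
    total = Counter()
    for v in vectors:
        total.update(v)
    return Counter({w: c / len(vectors) for w, c in total.items()})

def few_shot_classify(text, support_set):
    """support_set: {label: [a few example texts]} -- the 'few shots'."""
    vec = vectorize(text)
    scores = {label: cosine(vec, centroid([vectorize(t) for t in examples]))
              for label, examples in support_set.items()}
    return max(scores, key=scores.get)

# Hypothetical support set with only two examples per class.
support = {
    "harmful": ["vaccine changes your dna", "the vaccine damages dna"],
    "benign": ["got my vaccine appointment today", "vaccine clinic opens monday"],
}
print(few_shot_classify("does the vaccine alter dna", support))  # prints "harmful"
```

A real system would swap the bag-of-words vectors for embeddings from a pretrained multilingual model, which is what lets a few examples in one language generalize across many others.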
The social network, for example, has rules against posting harmful false information about the COVID-19 vaccine, including false claims that the vaccine damages DNA. But users sometimes phrase their comments as a question, such as “Vaccine or DNA changer?”, or use code words to try to evade detection. The new technology, according to Meta, will help the company catch content it may be missing.
“If we react faster, then we can initiate interventions and content moderation faster,” Cornelia Carapcea, a Meta product manager, said in an interview. “Ultimately, the goal here is to keep users safe.”
The new AI system could help the social network fend off criticism, including from President Joe Biden, that it is not doing enough to tackle misinformation on its platform, such as misinformation about the COVID-19 vaccine. Former Facebook product manager turned whistleblower Frances Haugen and advocacy groups have also accused the company of prioritizing profits over user safety, especially in developing countries.
Meta said it has tested the new system and that it can identify offensive content conventional AI systems might not detect. After the system rolled out on Facebook and its Instagram photo service, the percentage of user views of harmful content declined, Meta said. Few-Shot Learner works in more than 100 languages. The company hasn’t listed which languages are included, but Carapcea said the new technology could “make a big dent” in tackling harmful content in languages other than English, which may have fewer samples to train AI systems.
As Facebook focuses more on building the metaverse, virtual spaces in which people can socialize and work, content moderation will become more complex. Carapcea said she believes Few-Shot Learner could potentially be applied to virtual reality content.
“At the end of the day, Few-Shot Learner is technology used specifically for integrity,” she said. “But teaching machine learning systems with fewer and fewer examples is a topic that is at the forefront of research.”