How Meta trains technology

UPDATED JAN 19, 2022

Sometimes, the meaning of a piece of content is immediately obvious to a person but less clear to technology. To keep people safe, Meta needs to train artificial intelligence on how to detect violating posts.

For example, the following content combines text and images. Two of the images are good-natured; the other two are potentially mean-spirited.

Without proper training, most AI struggles to make these distinctions. It either reads the text and determines the literal meaning of the words, or it looks at the image to determine the general meaning of the photo’s subject. People, on the other hand, instinctively pair the text and image together to understand the content.

One way we address this is by training our technology to look at all the components of a post together and only then determine its true meaning. This goes a long way toward helping AI more accurately detect what a person sees when viewing the same post.
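The idea above can be sketched as multimodal "early fusion": concatenate features from a text encoder and an image encoder into one vector and score the combination, rather than scoring each modality on its own. Everything below is a toy illustration, not Meta's actual models; the encoders, weights, and the "smell"/skunk example are all hypothetical stand-ins.

```python
def encode_text(text):
    # Toy text encoder: two crude keyword signals.
    words = set(text.lower().split())
    return [
        1.0 if "amazing" in words else 0.0,  # complimentary wording
        1.0 if "smell" in words else 0.0,    # wording whose meaning can flip
    ]

def encode_image(subject):
    # Toy image encoder: assume an upstream classifier already
    # labeled the photo's subject.
    return [1.0 if subject == "skunk" else 0.0]

def unimodal_score(text, subject):
    # Each part judged alone: complimentary text and an animal photo
    # both look benign, so the post passes.
    text_bad = "hate" in text.lower()
    image_bad = subject == "weapon"
    return 1.0 if (text_bad or image_bad) else 0.0

def fused_score(text, subject):
    # Fusion: the *interaction* of "smell" wording with a skunk photo
    # is what makes the post mean-spirited.
    f = encode_text(text) + encode_image(subject)
    # toy learned weight on the text-image interaction term
    return 1.0 if f[1] * f[2] > 0 else 0.0
```

With these toy pieces, `unimodal_score("You smell amazing", "skunk")` stays at 0.0 (each part looks benign alone), while `fused_score` flags the same pairing, which mirrors how a person reads text and image together.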

We also use a system that guides AI to learn directly from millions of current pieces of content and helps pick training data that reflects our goals. This is different from typical AI systems, which rely on a fixed, static dataset for training. Using this method helps us better protect people from hate speech and content that incites violence.
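The post doesn't name the selection method, so the sketch below assumes one common approach, uncertainty sampling: score incoming content with the current model and prioritize the examples it is least sure about, so the training data tracks what the system is seeing now rather than a fixed dataset. The function names and the toy model are hypothetical.

```python
def select_for_labeling(stream, current_model, budget):
    # Keep the `budget` posts whose scores sit closest to the
    # decision boundary (0.5), i.e. where the model is most
    # uncertain and a label would teach it the most.
    ranked = sorted(stream, key=lambda post: abs(current_model(post) - 0.5))
    return ranked[:budget]

def toy_model(post):
    # Toy stand-in for the current classifier: one keyword signal.
    if "hate" in post:
        return 0.9
    if "maybe" in post:
        return 0.5
    return 0.1

batch = select_for_labeling(
    ["clearly fine", "maybe borderline", "hate speech here"],
    toy_model,
    budget=1,
)
# batch == ["maybe borderline"]: the model is confident about the
# other two posts, so the borderline one is selected for training.
```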

We still have work to do, but this training will help our technology continue to improve and better understand the true meaning of multimodal content.