Facebook: Hate speech accounts for 10 or 11 of every 10,000 content views

Guy Rosen, vice president of integrity, said in a newsroom post Thursday that Facebook's investment in artificial intelligence has helped proactively identify hate speech on its platform before users report it.

Rosen said during a press conference Thursday, "Think of prevalence like an air quality test to determine the percentage of pollutants."

Facebook Integrity Product Manager Arcadiy Kantor elaborated on the subject in a separate newsroom post, explaining that the social network calculates prevalence by selecting a sample of content that has been seen on Facebook and labeling the extent to which that content violates its hate speech guidelines.

To account for linguistic and cultural context, these samples are sent to content reviewers across different languages and regions.

Pointing out that the frequency with which content is viewed is not evenly distributed, Kantor wrote, “One piece of content could go viral and be seen by many people in a very short time, while other content could be on the internet for a long time and be seen by only a handful of people.”
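To make that view-weighted sampling idea concrete, here is a minimal sketch in Python. The sample data, the `reviewer_label` field, and the per-10,000-views scaling are assumptions for illustration only; Facebook has not published its actual sampling implementation.

```python
# Hypothetical sample of content *views* (not posts). Because each entry is
# one view, a viral post naturally appears many times, matching the
# view-weighted sampling Kantor describes.
# reviewer_label is an assumed field: True if a human reviewer marked the
# viewed content as violating the hate speech policy.
sampled_views = [
    {"content_id": "post_001", "reviewer_label": False},
    {"content_id": "post_002", "reviewer_label": True},
    {"content_id": "post_003", "reviewer_label": False},
    # ... in practice, a large random sample of views
]

def prevalence_per_10k(views):
    """Estimate prevalence: violating views per 10,000 sampled views."""
    if not views:
        return 0.0
    violating = sum(1 for v in views if v["reviewer_label"])
    return 10_000 * violating / len(views)

print(f"Estimated prevalence: {prevalence_per_10k(sampled_views):.1f} per 10,000 views")
```

Under this framing, the headline figure simply means that roughly 10 or 11 of every 10,000 sampled views landed on content reviewers labeled as hate speech.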

He also explained the challenges in determining what constitutes hate speech, writing, “We define hate speech as anything that directly attacks people based on protected characteristics such as race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity or serious disability or disease,” but added, “Language evolves, and a word that wasn’t a slur yesterday can become one tomorrow. This means that content enforcement is a delicate balance between making sure we don’t miss hate speech and not removing other forms of legitimate speech.”

Facebook continues to use a combination of user reports and AI to detect hate speech on Facebook and Instagram, and Kantor addressed the challenges the company faces with the human part of that equation, such as people in areas with lower digital literacy who may not be aware that they can report content, or people who report content they simply don’t like but that doesn’t violate Facebook’s guidelines, such as spoilers for TV shows or posts about rival sports teams.

Regarding the AI, he wrote, “When we started reporting our hate speech metrics in the fourth quarter of 2017, our proactive detection rate was 23.6%. This means that of the hate speech we removed, 23.6% was found before a user reported it to us. The remaining majority was removed after a user reported it. Today, we proactively detect about 95% of the hate speech we remove. Whether content is detected proactively or reported by users, we often use AI to take action on the straightforward cases and prioritize the more nuanced cases, where context needs to be considered, for our reviewers.”
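The arithmetic behind that metric is straightforward: the proactive detection rate is proactively detected removals divided by total removals. A one-function sketch, using hypothetical counts chosen to reproduce the reported Q4 2017 figure:

```python
def proactive_detection_rate(detected_proactively, reported_by_users):
    """Share of removed hate speech found before any user reported it."""
    total_removed = detected_proactively + reported_by_users
    return detected_proactively / total_removed

# Hypothetical counts: if 236 of every 1,000 removals were found proactively,
# the rate matches the 23.6% Facebook reported for Q4 2017.
print(f"{proactive_detection_rate(236, 764):.1%}")  # -> 23.6%
```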


Those content moderators might beg to differ, however: an open letter sent Wednesday by over 200 of them blasted Facebook’s AI systems, saying, “Management told moderators that we should no longer see certain varieties of toxic content coming up in the review tool we work from, such as graphic violence or child abuse. The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter, and risky content, like self-harm, stayed up. The lesson is clear. Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.”
