What thousands of leaked documents show is damaging to the company’s image.
Facebook is going through its worst crisis since the Cambridge Analytica scandal, after a whistleblower accused the company of putting "profit before safety" and shed light on its internal workings through thousands of memos.
The documents are disclosures made to US regulators and provided to Congress in redacted form by the legal counsel of Frances Haugen, the whistleblower who leaked them.
Facebook is often accused of failing to moderate hate speech on its English-language pages, but the problem is even worse in regions where other languages are spoken. This is despite the company's promise to invest more after being accused of facilitating the genocide in Myanmar in 2017.
A document written in 2021 warned of the shortage of moderators able to handle content in the Arabic dialects spoken in Saudi Arabia, Yemen and Libya. Another study, covering Afghanistan, where Facebook has 5 million users, found that even the pages explaining how to report hate speech were mistranslated.
These shortcomings persisted even though Facebook's own research had classified some of these countries as "high risk" because of their political fragility and the prevalence of hate speech.
According to one document, in 2020 the company allocated 87% of its budget for developing misinformation-detection algorithms to the US, leaving only 13% for the rest of the world.
Facebook has long said its AI systems can detect and remove hateful and abusive language, but these documents show their limitations. According to a March 2021 memo written by a group of its own researchers, the company takes action against only 3-5% of hate speech and just 0.6% of violent content. Another memo notes that the company may never get above 10-20%, because it is "extremely difficult" for an AI to understand the context in which language is used.
Yet as early as 2019, Facebook had decided to rely more heavily on AI for hate speech and to cut its spending on human moderation.