Here, the term “algorithm” refers to all of the computer code — jealously guarded by the company — that determines which content is shown to one person but not another. Thanks to the algorithm, the Facebook user is supposed to find themselves, little by little, facing more and more content that triggers their emotions and encourages them to return to the platform as often as possible, or to stay there as long as possible.
However, what critics have been claiming for years is that it doesn’t matter whether that content is fake news, or even hostile or hateful messaging: what matters is the “engagement” it generates, such as shares and comments.
So two non-profit organizations, Global Witness and Foxglove, tested Facebook’s algorithm by twice creating fake ads containing messages that dehumanize ethnic minorities in Ethiopia and Myanmar and include calls for murder. They chose these two countries because both are named in the documents leaked by whistleblower Frances Haugen last fall: two countries in which Facebook allegedly contributed to the spread of hate speech against minorities.
The ads were never actually published by the two organizations (their publication was scheduled for a future date), but they were approved by Facebook’s system. When notified of the situation, the company admitted that the ads should not have been approved.
“We picked the worst cases we could imagine,” Rosie Sharp of Global Witness told the Associated Press. “Things that should be easiest for Facebook to detect. They weren’t in coded language… They were outright statements that this kind of person is not human.”
In the wake of Frances Haugen’s revelations, the platform has consistently refused to say how many people it employs to moderate content in languages other than English, or in countries where English is not the first language.