“Meta needs to explain why its algorithm treats genuine Holocaust history with suspicion.”
Too Late Now
Earlier this month, Facebook’s algorithm flagged 21 posts from the Auschwitz Museum as going against its community standards — and now its parent company Meta is eating crow.
In a Facebook post, the Poland-based memorial said that Meta has apologized, though not directly to the museum, after the company’s content moderation algorithm moved some of its posts down in the feed over spurious claims that they violated community standards.
“We mistakenly sent notices to the Auschwitz Museum that several pieces of content the museum posted had been demoted,” a Meta spokesperson told The Telegraph. “In fact, that content does not violate our policies and was never actually demoted. We offer our sincere apologies for the error.”
In an April 12 post, the museum announced the erroneous flags and charged the social network with “algorithmic erasure of history.”
“The posts, which serve as tributes to individual victims of Auschwitz, have been unjustly targeted by this platform’s content moderation system and ‘moved lower in the Feed’, citing absurd reasons such as ‘Adult Nudity and Sexual Activity,’ ‘Bullying and Harassment,’ ‘Hate Speech,’ and ‘Violence Incitement,’” the post reads.
Indeed, as screenshots show, none of the posts in question contained any such content; instead, they featured portraits of Auschwitz victims and short descriptions of their lives and identities before they were murdered by the Nazis.
Common Problem
While the flags have since been rescinded, critics are sounding the alarm about how this kind of AI-powered system cuts humans out of the curation of important messages.
Shortly after the museum revealed the flags, Polish digital affairs minister Krzysztof Gawkowski trashed the site for the egregious mistake, calling it a “scandal and an illustration of problems with automatic content moderation” in a translation of his post on X-formerly-Twitter.
Gawkowski’s demand for Meta to further explain itself was echoed by the Campaign Against Antisemitism, which said in a statement to The Telegraph that the company’s apology didn’t go far enough.
“Meta needs to explain why its algorithm treats genuine Holocaust history with suspicion,” the representative told the British newspaper, “and what it can do to ensure that these stories continue to get told and shared.”
It’s bad enough that Meta’s content algorithm flagged such important historical material as problematic. Within the context of the company’s other major AI moderation failures, which include auto-translating “Palestinian” to “terrorist” and allegedly promoting pedophilic content, this error is a particular affront.
More on Meta AI: Meta’s AI Is Telling Users It Has a Child