Israel has accused Facebook of not doing enough to curb online content that incites violence against the state, with Public Security Minister Gilad Erdan describing the social network as a “monster” during a television interview on Saturday. Facebook defended its moderation policy in a statement to Reuters on Sunday, saying that it works closely with Israel to remove hateful or abusive content. The company did not directly address Erdan’s comments.
Israel has seen a wave of street attacks carried out by Palestinians in recent months. Since October, 34 Israelis and two Americans have been killed in such attacks, while Israeli forces have killed at least 201 Palestinians, according to Reuters. (Israel says that 137 of those killed were attackers.) The Israeli government says that much of the violence has been encouraged on Facebook, and has called on the site to more proactively police hateful content. A 19-year-old Palestinian who killed an Israeli girl last week praised terrorists on Facebook prior to the attack and expressed his desire to die as a martyr, the Israeli newspaper Haaretz reports.
In an interview on Israel’s Channel 2 on Saturday, Erdan accused Facebook of “sabotaging” police efforts to curb the violence by not cooperating in investigations in the West Bank, adding that the site has “a very high bar for removing inciteful content and posts.” “Facebook today, which brought an amazing, positive revolution to the world, sadly, we see this since the rise of Daesh and the wave of terror, it has simply become a monster,” Erdan said, using an Arabic term for ISIS. He also called on Israelis to “flood” Facebook CEO Mark Zuckerberg “in every possible place with the demand to monitor the platform he established and from which he earns billions.”
In its statement to Reuters, Facebook encouraged users to report content that violates its community standards, so it “can examine each case and take quick action. We work regularly with safety organizations and policymakers around the world, including Israel, to ensure that people know how to make safe use of Facebook,” the company said. “There is no room for content that promotes violence, direct threats, terrorist or hate speeches on our platform.”
Facebook, Twitter, and other major tech companies have come under increased pressure to crack down on hate speech and terrorist propaganda, following attacks in Europe and the US. In May, Facebook, Twitter, Google, and Microsoft agreed to an EU “code of conduct” that obliges the companies to review and remove hateful content within 24 hours, and to promote “independent counter-narratives” to propaganda. Last month, Reuters reported that major tech companies are exploring systems that would automatically identify and remove hateful content published online.