Content moderators work in a Facebook office in Austin, Texas.
Many jobs can lead to burnout, but spending 40 hours a week confronting the absolute worst cruelty humanity has to offer can go far beyond burnout and leave employees with severe psychological trauma. Facebook has now settled with a group of content moderators who sued the tech giant, claiming that their jobs saddled them with serious post-traumatic stress disorder that the company did nothing to mitigate or prevent.
The company is paying $52 million to settle the suit, which was first filed in 2018 by a content moderator named Selena Scola. Scola's complaint alleged that she developed "debilitating" PTSD after viewing "thousands of acts of extreme and graphic violence."
The conditions under which Facebook moderators often work have been extensively reported by The Guardian, The Verge (more than once), The Washington Post, and BuzzFeed News. Moderators, who largely work for third-party contractors, described to reporters hour after hour spent dealing with graphic murder, animal cruelty, sexual abuse, child abuse, and other horrific footage, all while receiving little or no managerial or mental health support and being held to strict quotas under shifting guidelines.
"We are delighted that Facebook has worked with us to develop an unprecedented program that can help people do work that was unimaginable a few years ago," said Steve Williams, an attorney who said Plaintiff represents, in a written statement. "The damage that this work can cause is real and serious."
More than 11,000 current and former content moderators who worked in Arizona, California, Florida, and Texas will receive at least $1,000 each from the settlement. Workers who have received a formal diagnosis of a mental health condition such as PTSD or depression may receive an additional $1,500 per diagnosis to cover the cost of treatment, up to a total of $6,000 per worker. People with qualifying diagnoses may also be able to submit evidence of other injuries they suffered as a result of their work for Facebook and receive additional compensation.
As with any class action, the amount any individual may receive could be reduced significantly if a majority of the class members file for and are entitled to benefits.
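As a rough illustration of that tiered structure, here is a minimal sketch in Python. The $1,000 base, the $1,500 per qualifying diagnosis, and the $6,000 cap come from the settlement as reported; reading the cap as applying to the combined total, and using a pro-rata factor to stand in for the class-participation reduction, are our simplifying assumptions.

```python
# Sketch of the tiered payout described above. The dollar tiers are from
# the reported settlement terms; the pro-rata factor is a hypothetical
# stand-in for the reduction that applies if most of the class files claims.

BASE_PAYMENT = 1_000
PER_DIAGNOSIS = 1_500
PER_WORKER_CAP = 6_000  # assumption: cap applies to the combined total

def estimated_payout(num_diagnoses: int, pro_rata_factor: float = 1.0) -> float:
    """One class member's estimated payout, before any extra-injury awards."""
    gross = BASE_PAYMENT + PER_DIAGNOSIS * num_diagnoses
    return min(gross, PER_WORKER_CAP) * pro_rata_factor

print(estimated_payout(2))        # base + two diagnoses: 4000.0
print(estimated_payout(2, 0.75))  # same claim if heavy filing cuts payouts 25%: 3000.0
```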
Facebook said in a statement that it is "grateful to the people who do this important work," adding that it is "committed to providing them additional support through this settlement and in the future."
Automating the solution
Facebook's ultimate goal is to automate content moderation as much as possible. That way, the company pays fewer contractors to screen harmful content, and those still doing the work don't have to see as much of it.
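Facebook hasn't published how its pipeline is built, but the general pattern such systems follow is score-and-triage: a classifier scores each post, near-certain violations are actioned automatically, and only borderline cases reach a human. A minimal sketch, with a stubbed-out classifier and hypothetical thresholds:

```python
# Generic score-and-triage pattern for automated moderation. This is an
# illustration of the approach, not Facebook's actual system; the
# thresholds and the classifier stub are hypothetical.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations: removed with no human review
HUMAN_REVIEW_THRESHOLD = 0.60  # borderline scores: routed to a moderator

def classify(post: str) -> float:
    """Stand-in for a trained model returning P(post violates policy)."""
    return 0.0  # placeholder; a real system would run an ML classifier here

def triage(post: str) -> str:
    score = classify(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # handled entirely by automation
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # the only bucket a moderator ever sees
    return "leave_up"
```

The point of the design is that middle bucket: the more violations automation can catch with high confidence, the smaller the share of disturbing content any moderator ever has to look at.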
The company's latest Community Standards Enforcement Report, released yesterday, shows that the automated tools are indeed getting better, even if they still have a long way to go.
About 90 percent of the hate speech Facebook removed in the most recent quarter was flagged automatically before any user reported it. In 2018, that figure was around 24 percent, company CEO Mark Zuckerberg said on a call with reporters, "and around zero percent the year before that." Overall, the company removed about 9.6 million pieces of hate speech in the first quarter, up from roughly 5.7 million in the previous reporting period, and took down about 4.7 million posts connected to organized hate groups.
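Put in absolute terms, those percentages still leave a substantial human workload. A back-of-the-envelope calculation using the article's rough figures (not numbers Facebook reported directly):

```python
# Rough Q1 figures from the report, per the article.
total_removed = 9_600_000  # pieces of hate speech removed in the quarter
proactive_share = 0.90     # share flagged automatically before any user report

flagged_automatically = total_removed * proactive_share
reached_humans_first = total_removed - flagged_automatically

print(f"{flagged_automatically:,.0f} caught by automation")       # 8,640,000
print(f"{reached_humans_first:,.0f} surfaced by user reports")    # 960,000
```

Even at a 90 percent proactive rate, on the order of a million pieces of hate speech per quarter still come in through user reports and pass through human review.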
Like every other platform right now, Facebook also has to deal with COVID-19 misinformation, which spreads quickly when you have more than 2.6 billion users. In April, Facebook's fact-checkers put "about 50 million labels on COVID-19 content, based on 7,500 articles," Zuckerberg said. The labels apparently work: in about 95 percent of cases, viewers don't click through to content that carries a misinformation warning.