Social media companies have cut hundreds of content moderation jobs amid the continuing wave of tech layoffs, stoking fears among industry workers and online safety advocates that major platforms are less able to tackle abuse than they were just months ago.
Tech companies have announced more than 101,000 job cuts this year alone, in addition to the nearly 160,000 during 2022, according to tracker Layoffs.fyi. Among the wide range of job functions affected by these cuts are “trust and safety” teams – the units within the main platform operators, and the contractor firms they hire, that enforce content policies and fight hate speech and misinformation.
Earlier this month, Alphabet reportedly downsized Jigsaw, a Google unit that creates content moderation tools and describes itself as tracking “threats to open societies,” such as civilian surveillance, by at least a third over the past few weeks. Meta’s main contractor for content moderation in Africa said in January that it was laying off 200 employees as it moved away from content review services. In November, mass layoffs at Twitter affected many staff members responsible for limiting prohibited content such as hate speech and targeted harassment, and the company dissolved its Trust and Safety Council the following month.
Postings on Indeed with “trust and safety” in their job titles were down 70% last month compared to January 2022 among employers across all industries, the job board told NBC News. While tech recruiting in particular has retreated across the board as the industry shrinks from its pandemic hiring frenzy, advocates said the global need for content moderation remains acute.
“Markets go up and down, but the need for trust and safety practices is constant or, where appropriate, increases over time,” said Charlotte Willner, executive director of the Trust & Safety Professional Association, a global organization for workers who develop and enforce digital platform policies regarding online behavior.
A Twitter employee who still works on the company’s trust and safety operations and asked not to be identified for fear of retaliation described feeling worried and overwhelmed since the department’s cuts last fall.
“We were already understaffed globally. The United States had a lot more staff than outside the United States,” the employee said. “In places like India, which are really marked by complicated religious and ethnic divisions, this hateful and potentially violent conduct has really increased. Fewer people means less work being done in many different spaces.”
Twitter accounts offering to trade or sell child sexual abuse material stayed on the platform for months after CEO Elon Musk promised in November to crack down on child exploitation, NBC News reported in January. “We certainly know we still have work to do in this space, and we certainly think we’ve been improving rapidly,” Twitter said at the time in response to the findings.
A representative for Alphabet declined to comment. Twitter did not respond to requests for comment.
A Meta spokesperson said the company “respect[s] Sama’s decision to exit the content review services it provides to social media platforms. We are working with our partners during this transition to ensure there is no impact to our ability to review content.” Meta has more than 40,000 people “working on safety and security,” including 15,000 content reviewers, the spokesperson said.
Worries about trust and safety cuts coincide with Washington’s growing interest in tighter regulation of Big Tech on multiple fronts.
In his State of the Union address on Tuesday, President Biden urged Congress to “pass bipartisan legislation to strengthen antitrust enforcement and prevent major online platforms from giving their own products an unfair advantage,” and to “impose tougher limits on the personal data that companies collect on all of us.” Biden and lawmakers from both parties have also signaled their openness to reforming Section 230, a measure that has long shielded tech companies from liability for speech and activity on their platforms.
“Various governments are looking to force big tech companies and social media platforms [to become more] responsible for ‘harmful’ content,” said Alan Woodward, cybersecurity expert and professor at the University of Surrey in the UK.
In addition to exposing tech companies to greater regulatory risk, any setback in content moderation “should be of concern to everyone,” he said. “It’s not just about weeding out child abuse material, but covers subtler areas of misinformation that we know are meant to influence our democracy.”