But there is little evidence that the company is taking more aggressive action under his management or putting more resources toward the platform’s long-running problem with child sexual exploitation content, according to interviews with four former employees, one current employee and people who work to stop child abuse content online, as well as internal company records.
Meanwhile, Musk has turned the topic of online safety into part of a larger effort to disparage Twitter’s previous leaders and portray his ownership of the company as part of a sociopolitical battle against “the woke mind virus,” as he calls center-left to far-left ideals. That shift comes as he has further embraced the kind of far-right online rhetoric that often also includes false claims of child sex abuse.
“It is a crime that they refused to take action on child exploitation for years!” Musk tweeted Friday in response to a resignation letter from a member of the company’s Trust and Safety Council who worked on child abuse issues.
“This is false,” former CEO Jack Dorsey responded.
Under Musk’s new management, Twitter said that its November account suspensions for child sexual exploitation content were higher than in any other month of 2022 thanks to new partnerships with unnamed organizations and new “detection and enforcement methods” the company did not describe.
Meanwhile, Twitter’s resources to fight child sexual exploitation content online (also referred to as child pornography or child sexual abuse material) are thin, following layoffs, mass firings and resignations from the company.
While the personnel count is still shifting at Twitter, internal records obtained by NBC News and CNBC indicate that as of early December, approximately 25 employees held titles related to “Trust and Safety” out of a total of roughly 1,600 employees still at the company. That total includes more than 100 people whom Musk has authorized to work at Twitter but who work at his other companies, Tesla, SpaceX and The Boring Co., along with assorted investors and advisers.
One former Twitter employee who worked on child safety said that they know of a “handful” of people at the company still working on the issue but that most or all of the product managers and engineers who were on the team are no longer there. The employee spoke on the condition of anonymity out of fear of retaliation for discussing the company.
Twitter’s head count had ballooned to more than 7,500 employees by the end of 2021. Even if Musk had not taken over the business, layoffs were likely, former employees said.
Twitter did not respond to requests for comment.
Under Musk’s management, the company has also pulled back on some external commitments with child safety groups.
For example, on Monday Twitter disbanded its Trust and Safety Council, which included 12 groups that advised the company on its efforts to address child sexual exploitation.
The National Center for Missing & Exploited Children (NCMEC), the organization tasked by the U.S. government with tracking reports of child sexual abuse material online, said little has changed under Musk’s leadership in terms of Twitter’s reporting practices so far.
“Despite the rhetoric and some of what we’ve seen people posting online, their CyberTipline numbers are almost identical to what they were prior to Musk coming on board,” said NCMEC representative Gavin Portnoy, referring to the organization’s centralized CSAM reporting system.
Portnoy noted that one change the group did notice was that Twitter didn’t send a representative to the organization’s annual social media roundtable.
“The previous person was one of the folks who resigned,” Portnoy said. Asked whether Twitter wanted to send a stand-in, he said the company declined.
More recently, Musk has used the issue of child sexual exploitation content to attack former Twitter employees, most notably Yoel Roth, who led the company’s trust and safety efforts and whom Musk lauded when he took over as CEO in October. Roth left Twitter a few weeks later, following the U.S. midterm elections.
Musk suggested that Roth’s doctoral thesis about the LGBTQ dating app Grindr advocated for child sexualization when the opposite was true. Roth, who is openly gay, specifically stated that Grindr was not a safe space for minors and discussed how to create age-appropriate content and spaces for LGBTQ teens to connect online and through apps.
Musk’s misleading claims left Roth facing widespread online abuse, including on Twitter, where users hurled homophobic and antisemitic threats, memes and slurs at him. On Monday, Roth reportedly moved out of his home due to threats following Musk’s baseless accusations about him.
Musk’s incendiary tweets targeting Roth fit into a rising strain of far-right attacks on LGBTQ people using false allegations of “grooming,” said Laurel Powell, deputy director of communications for programs at Human Rights Campaign, an LGBTQ advocacy nonprofit.
“This grooming rhetoric is really in a lot of cases just recycled hate speech against LGBTQ+ people,” Powell said. “This is a really dangerous moment that we’re in — that someone with as large of a platform as Mr. Musk is feeding into this false disproven rhetoric.”
Twitter’s imperfect efforts to fight child sexual exploitation content are well documented. In 2013, the company said it would introduce PhotoDNA technology to prevent the posting of child sexual abuse material (CSAM) already present in a database of known CSAM. That technology cannot detect newly created material, however.
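PhotoDNA’s algorithm itself is proprietary, but the database-matching idea behind it, and its limitation, can be sketched in a few lines. Everything below is an illustrative stand-in: PhotoDNA uses a perceptual hash that is robust to resizing and re-encoding, not SHA-256, and the sample data and function names are hypothetical.

```python
import hashlib

# Illustrative sketch only: PhotoDNA is a proprietary perceptual hash
# robust to resizing and re-encoding; SHA-256 stands in here purely to
# show the known-database matching concept.

def hash_bytes(data: bytes) -> str:
    """Return a hex digest used as the database lookup key."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of previously catalogued material.
KNOWN_HASHES = {hash_bytes(b"example-previously-catalogued-image")}

def is_known_material(upload: bytes, known: set[str]) -> bool:
    # A match requires the material to already be in the database;
    # newly created content hashes to an unseen value and slips through,
    # which is the limitation noted above.
    return hash_bytes(upload) in known
```

The design trade-off is that hash lookups are cheap enough to run on every upload, but the system is only as good as the database it checks against.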
The company’s transparency reports, which detail things like legal requests and account takedowns, show that the company removed more than 1 million accounts in 2021 for violations of its child sexual exploitation content rules.
In 2021, Twitter reported 86,666 instances of CSAM detected on its platform, a number Portnoy said likely understates the problem. “We’ve always felt that there should have been more reports coming out of Twitter, no matter how you cut it, and just given the sheer number of users that are there,” he said.
Child sexual exploitation content has remained a problem for Twitter, though most major social media platforms continue to deal with it in some form or another.
Some advertisers left Twitter earlier this year after ads were found to have appeared alongside problematic content. Twitter was sued in 2021 by a child sex abuse victim and their mother, who alleged the company did not act swiftly enough when alerted to a video of the child circulating on the platform. A second child was later added to the lawsuit, which is currently before the Ninth Circuit Court of Appeals.
Moderation of this content usually relies on a combination of automated detection systems, specialized internal teams and external contractors to identify child abuse content and remove it. Twitter’s policy defines this content as “imagery and videos, referred to as child pornography, but also written solicitations and other material that promotes child sexual exploitation.”
According to people familiar with the situation and the internal records, the layoffs, firings and resignations have cut the number of engineers at Twitter by more than half, including many employees and leaders who worked on trust and safety features and improvements to the existing platform. Musk has also cut contractors, while the company looks to high-tech automation for its moderation needs, Twitter’s current head of Trust and Safety, Ella Irwin, told Reuters.
“You tend to think more bodies equals more safety,” Portnoy said. “So, I mean, that is disheartening.”
It’s unclear how many Twitter employees remain to work on child safety issues.
A search on LinkedIn for current Twitter employees who say they work on child safety turned up only a few accounts. Bloomberg and Wired each previously reported that Musk-directed layoffs and terminations at Twitter had reduced the number of people working there on content moderation, including with a focus on child sexual exploitation content.
Despite this, Musk has maintained that he is reorienting the company in a way that will prioritize child safety.
Twitter has turned to at least one outside researcher for help: Andrea Stroppa, an Italian cybersecurity researcher who says he is friendly with Musk and who has praised him online since the Twitter takeover.
Stroppa previously analyzed bots and propaganda on Twitter, and told NBC News he is now working with employees at the company to find and take down child sexual exploitation content and accounts. Twitter’s current head of Trust and Safety, Irwin, has thanked Stroppa for his “partnership and dedication.”
Stroppa said he felt Twitter’s previous efforts were lacking and that it now moves quickly to find and suspend accounts that post child sexual exploitation content. He said the company has also changed its policy from removing individual tweets to immediately banning accounts found to have violated its policies.
“I think it’s a radical change,” he said in a phone interview.
Marijke Chartouni, who experienced human trafficking and abuse and now works to raise awareness of the issue, said she got good results by flagging problematic accounts and content to Twitter starting in 2020.
The platform wasn’t perfect, but it wasn’t neglectful, as Musk has claimed. “‘Old’ Twitter was quick to respond and took down the accounts,” she said in an email. “I felt like I was making some progress.”