Main Highlights
- Facebook claims it has spent more than $13 billion on “safety and security” since 2016 and employs 40,000 people in the field.
- The firm cited the figures as an example of how it has dealt with difficulties on Facebook and Instagram.
- While the company has typically responded to concerns on its platforms reactively, it says it is trying to become more proactive by embedding safety and security staff in product teams during the development process.
- According to the Journal, Facebook was aware of major problems such as misleading COVID-19 information and harmful emotional effects on users but delayed fixing them for fear of undermining user engagement with its platforms.
Facebook claims it has spent more than $13 billion on “safety and security” since 2016 and employs 40,000 people in the field, providing a look inside its operations following a week of revelations reported by The Wall Street Journal. The firm cited the figures as an example of how it has dealt with difficulties on Facebook and Instagram, claiming that the Journal’s reports missed “essential context” regarding complicated topics.
The new statistics, which are meant to demonstrate how seriously the company takes safety and security issues, were published in a blog post on Tuesday. They follow a series of stories last week in the Wall Street Journal that used leaked documents to show that, despite significant investments, Facebook struggles to combat a range of serious problems, including COVID-19 misinformation and illegal human trafficking.
Facebook’s major concerns
According to the documents, Facebook's own researchers repeatedly flagged major problems with improper content or user behavior on the company's services, but Facebook often failed to fix them. The revelations prompted US lawmakers to call for an investigation and possible hearings on the issues.
The blog post addressed some of these issues without directly mentioning the newspaper's reports. While the company has typically responded to concerns on its platforms reactively, it says it is trying to become more proactive by embedding safety and security staff in product teams during the development process.
“In the past, we did not address safety and security problems early enough in the product development process,” Facebook wrote in the blog post. “However, we have radically altered that strategy. Today, we incorporate safety and security teams directly into product development teams, allowing us to address these concerns throughout the product development process rather than afterward.”
Misleading Information
According to the Journal, Facebook was aware of major problems such as misleading COVID-19 information and harmful emotional effects on users but delayed fixing them for fear of undermining user engagement with its platforms. Over the weekend, Facebook executive Nick Clegg published a reply, accusing the publication of “deliberate mischaracterizations of what we are attempting to do.”
Much of Facebook’s new response casts the Journal’s allegations in a more positive light, focusing on how it eventually responded, such as removing 20 million pieces of false COVID-19 and vaccination content and blocking 3 billion fake accounts in the first half of 2021, rather than whether it acted too late.
If the new safety and security figures are accurate, they update some of Facebook's earlier numbers. Since 2017, when it had 10,000 people working in that area and pledged to increase the number within a year, the company appears to have quadrupled its safety and security staff. (The figure includes outside contractors.)
Meanwhile, the $13 billion figure backs up a 2018 promise that Facebook would devote “billions of dollars each year” to safety and security; spread over the years since 2016, it works out to more than $2 billion per year on average. Facebook made its high-profile investment pledges in 2017 and 2018, years marked by criticism over its role in political misinformation campaigns and the Cambridge Analytica privacy scandal, so it is likely that most of the spending came in more recent years.
In the first half of this year, Facebook said its artificial intelligence technology helped block 3 billion fake accounts. The company also removed almost 20 million pieces of false COVID-19 and vaccine content.
According to the company, it now removes 15 times more content that violates its hate speech rules across Facebook and its image-sharing platform Instagram than it did when it first began reporting such removals in 2017.