Meta Accused of Misrepresenting Moderation Performance and Violation Reporting
Meta, formerly known as Facebook, is facing accusations that it misrepresented the performance of its moderation teams and the accuracy of its violation reporting. A complaint filed on behalf of 33 states alleges that Meta's Community Standards Enforcement Reports do not reflect the company's internal data on violations: the reports tout low rates of community standards violations, but exclude key findings from user experience surveys indicating that users encounter harmful content far more often.
For example, Meta claims that out of every 10,000 content views on its platforms, only 10 or 11 contain hate speech. Yet Meta's own internal user survey found that an average of 19.3% of Instagram users and 17.6% of Facebook users reported witnessing hate speech or discrimination on the platforms. The complaint alleges that Meta knows about this discrepancy but publicly presents the alternative statistics to reduce scrutiny and provide a false sense of safety in its apps.
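Part of what makes this comparison stark is that the two figures use different denominators: Meta's number measures prevalence per content view, while the survey measures the share of users who encountered such content at all. A minimal back-of-the-envelope sketch illustrates how those metrics relate; the 200 views per week is a purely hypothetical assumption for illustration, not a figure from the complaint or from Meta:

```python
# Back-of-the-envelope check: how a low per-view prevalence rate can
# coexist with a high share of users reporting encounters.
# The per-view rate is Meta's public figure; the weekly viewing volume
# is a hypothetical assumption chosen for illustration.

per_view_rate = 11 / 10_000   # ~0.11% of views contain hate speech (Meta's figure)
views_per_week = 200          # assumed posts a typical user sees in a week

# Probability that a user sees at least one piece of hate speech in a
# week, treating views as independent draws.
p_at_least_one = 1 - (1 - per_view_rate) ** views_per_week
print(f"Chance of at least one encounter per week: {p_at_least_one:.1%}")
# -> roughly 19.8% under these assumptions
```

Under these assumed numbers, a roughly 0.11% per-view rate still translates into about one in five users encountering hate speech in a given week, which is precisely why reporting only the per-view figure can understate how many users are actually exposed.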
The complaint also highlights that Meta has received over 1.1 million reports of users under the age of 13 accessing Instagram since early 2019, yet has disabled only a fraction of those accounts. If Meta is found to have violated privacy laws, it could face significant fines and heightened scrutiny of its protection and moderation measures, particularly those aimed at younger users.
Meta has responded that the complaint mischaracterizes its work by relying on selective quotes and cherry-picked documents.
Potential Impact on Meta’s Business and Regulatory Environment
If the complaint leads to findings against Meta, it could produce a more accurate picture of the actual rates of exposure and potential harm within Meta's apps. That, in turn, would put the spotlight back on the effectiveness of content moderation on Meta's platforms, and could prompt even tougher regulations around young users and data access.
Europe has already enacted the Digital Services Act (DSA), which includes provisions to protect younger users, such as a ban on targeting minors with advertising based on their personal data. It remains to be seen whether similar restrictions will be imposed in the United States as a result of this complaint. The proceedings could also shed further light on Meta's reporting and protection practices with respect to user safety and privacy.
As more young children use social media, both companies and parents share responsibility for ensuring age-appropriate access and content. If the investigation shows that Meta knowingly allowed underage access to its platforms, however, the consequences could extend well beyond Meta to the broader social media sector.
Ultimately, the outcome of the complaint will shape both the repercussions for Meta's business and the regulatory landscape around user safety and privacy on social media platforms.