Meta's new policies promise to change the way content is moderated on its platforms. Learn what this means for businesses and how it affects the removal of fraud and scams.
On January 7, 2025, Meta CEO Mark Zuckerberg announced significant changes to the company’s Trust and Safety program, which oversees the enforcement of policies related to misinformation and other violations of community standards. These updates will directly impact Meta’s flagship platforms: Facebook, Instagram, and Threads—some of the most popular social networks.
With the updates, the company intends to "restore freedom of expression on the platforms," in Zuckerberg's words. He argues that this will correct errors in the filters that were allegedly censoring content inappropriately.
Post by Mark Zuckerberg on Threads Highlights Meta’s Policy Changes.
Zuckerberg argues that this would free up resources to focus on filtering illegal content, citing criminal activities such as drugs, terrorism, and child exploitation. However, he also acknowledges that with these changes, the platform will "detect fewer harmful things," which is seen as an acceptable trade-off to reduce censorship.
But what exactly has changed in Meta's policies, and why could this matter to your business?
The removal of the fact-checking program is more closely tied to Zuckerberg's stance on freedom of expression, since fact-checking teams were responsible for filtering misinformation in posts across the platforms.
However, cases involving the misuse of brands are different, raising questions about how these changes might affect the processes for removing malicious content, or takedowns.
By shifting the focus of filters to high-severity violations, Meta has included fraud and scams in this category. In his remarks, Meta's CEO stated: "We will continue directing these systems to combat illegal and high-severity violations, such as terrorism, child sexual exploitation, drugs, fraud, and scams."
Unauthorized use of brands can be classified as illegal activity, reinforcing the inclusion of cases such as fake profiles and ads impersonating brands under this new focus. In these scenarios, the most common type of complaint is an intellectual property violation claim, used to remove profiles that appropriate brand identifiers, such as names and logos, for personal gain.
The DMCA (Digital Millennium Copyright Act) is a U.S. copyright law that exempts online service providers from liability for copyright violations, provided they promptly remove infringing content. As a U.S.-based company, Meta is bound by rules that require immediate action on valid and well-substantiated copyright infringement claims. In many cases involving fraudulent profiles and ads, protected brand assets, wording, and imagery are used to lend credibility to fake pages.
This makes it unlikely that the alleged bias of the California-based Trust and Safety team significantly influenced the removal of fake profiles impersonating brands or fraudulent ads. Therefore, no substantial changes in evaluation criteria for these cases are expected, regardless of the team's relocation to Texas. However, it is unclear whether the teams will also be downsized, as occurred at X (formerly Twitter) after its acquisition by Elon Musk.
Since the details of the transition remain vague, the impact on takedown timelines or even the overall effectiveness of takedowns during this period remains uncertain.
The expectation is that the changes will have a more pronounced effect on the criteria for removing political content and hate speech. It remains to be seen whether the platform will actually fulfill its promise to redirect efforts toward the high-severity violations mentioned. On the other hand, the team relocation and other internal operational changes may disrupt the quality and speed of responses to takedown requests for fake profiles and fraudulent ads.
The acquisition of Twitter by Elon Musk, finalized in October 2022, triggered a drastic restructuring of Trust and Safety teams. Estimates from various sources suggest cuts of 50% to 75%, including key members of moderation and security teams, reducing the platform's ability to conduct in-depth analysis and respond quickly. These cuts were compounded by voluntary departures, including that of Ella Irwin, who led the Trust and Safety team.
Content moderation policies were also revised, emphasizing "freedom of expression," and rules for account suspension or blocking were updated. Accounts previously banned for hate speech, spam, or systematic violations of prior policies were reinstated. Policies on "fake news" or misleading content were softened in some areas but maintained restrictions on sensitive topics, such as election misinformation and financial fraud.
Account verification (Twitter Blue) shifted to a paid subscription model, diminishing the link between verification and account authenticity.
In September 2024, Musk relocated X's headquarters from California to Texas, where his other companies, SpaceX and The Boring Company, are based. Texas prohibits large social networks from moderating content based on political viewpoint and offers a highly business-friendly regulatory environment.
In February 2023, just months after Musk's acquisition, Axur observed significant spikes in uptime—the time between fraud notification and the actual removal by the platform. Some takedown requests also went unanswered.
Reporting procedures were adjusted, requiring the use of the specific Trademark channel instead of impersonation channels. This change meant that reports against profiles impersonating a brand were only accepted if the profiles were publicly active. Private messages or protected tweets could no longer serve as evidence in such cases.
By May 2023, uptimes had returned to near pre-restructuring levels. However, additional spikes were noted in September 2023 and January 2024, with stability returning in April 2024 and remaining consistent since.
Axur also observed a decline in the number of incidents starting in August 2023, possibly reflecting the platform's reduced overall audience. Despite these changes, the success rate of takedowns remained steady.
The updates do not change the guidelines for how businesses should report malicious content to Meta but emphasize the importance of following best practices to ensure efficient notifications. More than ever, platforms will prioritize reports submitted through the correct channels with accurate messaging. This expertise in reporting can be the key to successfully removing malicious content from social networks.
The first step is to leverage automation for detecting incidents involving brands or executive personas on Facebook and Instagram. Using Artificial Intelligence to identify risk attributes and classify threats is particularly beneficial for crafting the correct message. This can be done by internal information security teams or through external cybersecurity specialists like Axur.
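As a purely illustrative sketch (not Axur's actual detection pipeline), the snippet below shows how risk attributes from a detected profile might be combined into a coarse classification. Every field name, score, and threshold here is a hypothetical assumption used only to make the idea concrete.

```python
# Illustrative sketch only -- not a real product implementation. Field names
# and thresholds are hypothetical; a production pipeline would combine ML
# models (image similarity, text classification) with rules like these.
from dataclasses import dataclass

@dataclass
class DetectedProfile:
    handle: str
    display_name: str
    bio: str
    logo_similarity: float        # 0.0-1.0 score from an image-matching model
    is_verified: bool
    has_external_payment_link: bool

def classify_risk(profile: DetectedProfile, brand: str) -> str:
    """Assign a coarse risk label to a profile detected for a monitored brand."""
    score = 0
    text = f"{profile.handle} {profile.display_name} {profile.bio}".lower()
    if brand.lower() in text:
        score += 2                # brand name appropriated in identifiers
    if profile.logo_similarity >= 0.85:
        score += 3                # near-identical use of the brand logo
    if profile.has_external_payment_link:
        score += 2                # common in scam/fraud impersonations
    if profile.is_verified:
        score -= 3                # likely the official account

    if score >= 5:
        return "high"             # candidate for immediate takedown notification
    if score >= 3:
        return "medium"           # queue for analyst review
    return "low"

# Example usage with a hypothetical detection result
suspect = DetectedProfile(
    handle="acme.support.help",
    display_name="ACME Support",
    bio="Official ACME support. DM us to recover your account.",
    logo_similarity=0.92,
    is_verified=False,
    has_external_payment_link=True,
)
print(classify_risk(suspect, "ACME"))  # -> "high"
```

In practice, a score like logo_similarity would come from dedicated image- and text-matching models, and the thresholds would be tuned against analyst feedback rather than fixed by hand.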
At this moment, having access to sophisticated technology is a significant advantage in supporting the protection of your brand and your customers.
AI plays an increasingly vital role in takedown programs for two main reasons: first, it enables incidents to be detected, classified, and documented at a scale and speed that manual review cannot match; second, it keeps notification workflows adaptable when platforms change their reporting forms, channels, or APIs.
Another point to keep in mind is that changes in social platforms like those implemented by Meta, especially when influenced by political motivations, can lead some users to abandon these platforms in search of alternatives. In the United States, following Zuckerberg's announcements, Google searches for "how to delete Facebook account" surged by 5,000%.
It’s also crucial to monitor the rise of new platforms because cybercrime tends to follow where the audience migrates—a trend clearly seen with the growth of platforms like TikTok. Therefore, quickly adapting to monitor and address infractions, such as unauthorized brand use, and intervening through the correct takedown channels on emerging platforms is vital.
The use of AI to automate notification workflows, particularly on new platforms, provides a significant advantage. Over the past year, agentic AI technologies—offering essential adaptability to handle changes in notification forms or API modifications without requiring extensive coding—have advanced rapidly. Axur's platform already integrates agentic AI for notifications, making it the first (and currently the only) company in the world to leverage this technology for takedowns.
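To illustrate why that adaptability matters, here is a minimal Python sketch of a notification workflow that separates the content of a report from the channel it is routed to. The channel names, payload fields, and routing table are hypothetical assumptions and do not reflect Meta's or Axur's real reporting interfaces.

```python
# Illustrative sketch only -- channel names and fields are hypothetical.
# The point is separating "what to report" from "where/how to report it",
# so a change in a platform's reporting form only touches the routing table.
from dataclasses import dataclass, field

@dataclass
class TakedownNotice:
    platform: str                 # e.g. "facebook", "instagram"
    violation_type: str           # e.g. "trademark", "impersonation"
    target_url: str
    brand: str
    evidence_urls: list = field(default_factory=list)

# Per-platform routing: which channel accepts which violation type.
# This mapping is what changes when a platform reworks its forms.
REPORT_CHANNELS = {
    ("facebook", "trademark"): "ip_report_form",
    ("instagram", "trademark"): "ip_report_form",
    ("facebook", "impersonation"): "impersonation_form",
}

def build_message(notice: TakedownNotice) -> str:
    """Compose the notification text from the structured evidence."""
    evidence = "\n".join(f"- {url}" for url in notice.evidence_urls)
    return (
        f"Unauthorized use of the {notice.brand} brand at {notice.target_url}.\n"
        f"Violation type: {notice.violation_type}.\n"
        f"Supporting evidence:\n{evidence}"
    )

def route(notice: TakedownNotice) -> tuple[str, str]:
    """Pick the reporting channel and return (channel, message)."""
    channel = REPORT_CHANNELS.get((notice.platform, notice.violation_type))
    if channel is None:
        raise ValueError(
            f"No known channel for {notice.platform}/{notice.violation_type}"
        )
    return channel, build_message(notice)

notice = TakedownNotice(
    platform="instagram",
    violation_type="trademark",
    target_url="https://instagram.com/fake-acme-profile",
    brand="ACME",
    evidence_urls=["https://acme.example/brand-registration.pdf"],
)
print(route(notice))
```

Because the platform-specific details live in a single routing table, a change like X's 2023 shift from impersonation channels to the Trademark channel, mentioned above, only requires updating that configuration, which is exactly the kind of adjustment agentic AI can take over without extensive re-coding.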
The changes in Meta's moderation policies indicate a clear shift in the platform's priorities. However, regarding fraud and scams, these activities are expected to remain classified as high-severity violations. Consequently, no significant changes in takedown processes for these cases are anticipated. Nevertheless, the relocation of the Trust and Safety team and the possibility of staff reductions, as seen with X, introduce uncertainties about the long-term implications of these actions.
Another critical consideration is the potential impact on Meta's user base. Significant changes in moderation policies, particularly those tied to debates around freedom of expression, have historically led to audience declines or migrations to alternative platforms. The rise of new networks, such as TikTok, highlights that cybercriminals follow where the audience goes, necessitating agile and adaptable monitoring strategies.
In this context, Artificial Intelligence emerges as an essential tool for businesses aiming to protect their brands in digital environments. Solutions that automate incident detection, expedite notifications, and adapt quickly to platform changes—such as those offered by Axur—become a strategic advantage.
If you want to learn more about the impact of Meta’s new policies or are dealing with challenges like fraud and fake profiles, we’re here to help. Get in touch to benefit from our specialized support.
What has changed in Meta’s moderation policies?
Meta has ended its fact-checking program, replacing it with a collaborative model called "Community Notes." Additionally, automated moderation will now focus on illegal or high-severity content, such as terrorism, child exploitation, fraud, and scams.
Do these changes impact the takedown process for fraudulent content?
Not directly. Fraud and scams remain classified as high-severity violations, keeping them a priority for removal. However, the relocation of the Trust and Safety team to Texas introduces uncertainties regarding response times and consistency.
What is Meta’s Trust and Safety program?
This is the department responsible for enforcing content moderation policies on Meta’s platforms, including Facebook, Instagram, and Threads. It ensures the removal of content that violates community standards and applicable laws.
How can companies ensure fraud is removed on Meta’s platforms?
Submitting well-supported notifications through the correct channels and providing clear evidence is essential. Automating the detection and notification process using AI-powered technologies, such as those offered by Axur, can also significantly enhance effectiveness.
What should businesses do in light of these changes?
Investing in technological solutions, like Axur’s platform, is critical. These tools automate and streamline the identification and notification of incidents, ensuring effective responses even in a constantly changing environment.