
2.2 Billion Fake Accounts Removed: Facebook Rules

By Andre Luiz R. Silva

Yes, that’s right: In just the first quarter of 2019, Facebook removed 2.2 billion fake accounts. The number nearly doubled compared to the last quarter of 2018 (when 1.2 billion accounts were removed), which means that Zuckerberg’s social network is taking increasingly tough measures against these infractions.


How (and why) does Facebook remove so many fake accounts?

The basis of any removal action Facebook carries out on its platform is, obviously, noncompliance with its existing rules. You may have skipped the instructions without reading a word when you signed up, but the rules exist and are essential for getting along on the social network. True, this discussion may seem pedantic, but the subject is quite serious. Every quarter, Facebook details how content removal actions were performed in its Community Standards Enforcement Report.

Okay, but what exactly is a fake account?

According to the section entitled Spam, which is item 18 of Facebook’s Community Standards, fake accounts will be removed if they are created or used to: 

  • Impersonate or pretend to be a business, organization, public figure, or private individual  
  • Attempt to create connections, create content, or message people

That little rule above is part of a group of items under Integrity and Authenticity, which also includes number 19: Misrepresentation. That section covers inauthentic user behavior, including, among other items, the creation of accounts under fake names, or accounts that aim to deceive people in any of various ways.

Finding infractions before they’re reported

Facebook is nothing if not proactive. Of all the fake accounts removed in the first quarter of 2019, only 0.2% were reported by users; in the last quarter of 2018, that figure was 0.3%. Once again, that shows how Facebook is becoming increasingly engaged in enforcing its standards against fake account creation itself, so that it can remove such accounts as quickly as possible.
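To put that 0.2% in perspective, here is a quick back-of-the-envelope calculation using the figures cited above (the script itself is just an illustrative sketch, not anything from the report):

```python
# Figures from Facebook's Q1 2019 Community Standards Enforcement Report
total_removed = 2_200_000_000   # fake accounts removed in Q1 2019
user_reported_share = 0.002     # only 0.2% were flagged by users

user_reported = int(total_removed * user_reported_share)
proactively_found = total_removed - user_reported

print(f"User-reported: ~{user_reported:,}")          # ~4,400,000 accounts
print(f"Found proactively: ~{proactively_found:,}")  # ~2,195,600,000 accounts
```

In other words, for every account a user flagged, Facebook's own systems caught roughly 500 more on their own.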


Not just fake accounts: Other content removal

The most recent Community Standards Enforcement Report includes some new items not found in previous versions. This is the first time Facebook has described the appeals process for removal decisions, and it has published data on corrective procedures in cases of undue removal. New information was also reported for the category of regulated goods, such as drugs and weapons. In addition, we must not forget that illegal sales and data leaks also occur on the platform.

The report covers another eight categories in addition to fake accounts, such as terrorist propaganda, hate speech, bullying, child exploitation, and other, more serious kinds of infractions. But the largest numbers, alongside fake accounts, belong to spam.


That despicable spam!

Spam is a generic term for inauthentic content, or behavior that violates the Facebook Community Standards, generally with financial gain as the intent. The spam removal numbers in the report are striking: from January to March of 2019, there were 1.8 billion removals, the same number as reported in the previous quarter!

And, just as with other kinds of infractions, spam removals are growing rapidly. In the second quarter of 2018, total spam detections were 959 million; by the end of that year, the quarterly figure had roughly doubled (!) within just six months.

As for users who claimed their content was unduly labeled as spam, the first quarter of 2019 counted 20.8 million appeals. However, only 5.7 million of those removals were in fact restored upon appeal. In other words, it's not enough just to complain: the Facebook court is at full throttle, separating well-founded objections from those that were, in fact, erroneous.
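The success rate implied by those two figures can be computed directly (the numbers are from the report; the snippet is just an illustrative sketch):

```python
# Figures from Facebook's Q1 2019 Community Standards Enforcement Report
appeals = 20_800_000            # spam removals appealed by users
restored_on_appeal = 5_700_000  # removals reversed after appeal

restoration_rate = restored_on_appeal / appeals
print(f"Restored on appeal: {restoration_rate:.1%}")  # prints "Restored on appeal: 27.4%"
```

So roughly one in four appeals succeeded; the rest of the removals were upheld.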


What is the future of content removal?

As Facebook has clearly demonstrated, more and more abusive content will be removed from the platform as the mechanisms and strategies for identifying infractions become increasingly refined. And this seems to be a trend among the giants of the Internet. Google is now even using machine learning techniques to uncover violations of its content policies.

If you want to guard the reputation of your brand or company in an environment as extensive as the Internet, it would be a good idea to invest in monitoring and protection strategies. Here at Axur, we have a variety of solutions to find and remove digital risk with the help of our robots. One good option is our specific solution for fake social profiles!



Eduardo Schultze, CSIRT Coordinator at Axur, holds a degree in Information Security from UNISINOS – Universidade do Vale do Rio dos Sinos. He has worked since 2010 with fraud targeting the Brazilian market, mainly phishing and malware.


Andre Luiz R. Silva

A journalist working as Content Creator at Axur, in charge of Deep Space and press activities. I have also analyzed lots of data and frauds here as a Brand Protection team member. Summing up: working with technology, information and knowledge together is one of my biggest passions!