
How does Facebook Protect Business and Brand Content?

By Matheus G. Loyola

Facebook and Instagram have established content protection rules in their terms of service in order to protect their communities. According to these rules, it’s prohibited to publish content that infringes on third-party intellectual property. 

Based on these protection policies, ordinary users and intellectual property owners may submit complaints to the platforms about content that infringes their intellectual property. If a complaint is valid, the team responsible for reviewing complaints acts promptly to remove the reported content.


Brand Content Incidents on Facebook: three categories

Facebook categorizes intellectual property incidents into three scopes of complaints. They are:

Copyright ©

Copyright (also called author's rights) is the legal right that protects original works such as books, music, movies, and art, among other formats. Copyright law does not prohibit the "fair" use of an author's expression (e.g., parody).

Trademark ™

A trademark or registered trade name may be a word, slogan, symbol, or design that distinguishes certain products or services offered by a company. Using a trademark to criticize the brand or discuss its products does not constitute trademark infringement.


Counterfeit

A counterfeit product is usually a replica or a copy of a product from another company. Typically, the name, logo (trademark) and/or features of the genuine product are copied. The manufacture, promotion or sale of counterfeits is a type of trademark infringement.

With these parameters defined, Facebook has drawn up its community guidelines to clarify what may not be published, with the aim of making the platform a safer environment for content creators and users in general. Within these guidelines, two practices most concern business marketing teams: spam and fake accounts.



Spam

Spam is automated activity carried out through bots or scripts, generally coordinated across multiple fake accounts to promote dubious content. Spam types include commercial spam, misleading advertising, fraud, phishing pages, and the promotion of counterfeit goods.

Facebook relies on spam detection technology, complaints from users, and Digital Risk Protection (DRP) companies. The platform's goal is to detect spam and take countermeasures before it spreads. Using this framework, Facebook removed more than 1.8 billion instances of spam circulating on the social network in the first quarter of 2019 alone.


Fake accounts

Usually, "fake profiles" are ground zero for other violations of social media standards, so Facebook proactively blocks and removes fake pages. Users often create fake profiles and pages to hide their identity. In many cases these users are malicious, employing bots and scripts to create fake accounts at scale in order to spread spam and/or run scams.

In the first quarter of 2019 alone, more than 2.2 billion fake profiles and pages were removed. These removals were based largely on user complaints and DRP companies focused on brand protection.

During the last half of 2018, Facebook counted:

  • 337,100 copyright complaints (averaging 56,183 notices per month)
  • 44,600 complaints concerning registered trademarks (averaging 7,433 per month)
  • 37,500 complaints about counterfeits (averaging 6,250 notices per month)

In that same survey, Facebook revealed the removal rate relative to the number of complaints received:

  • Copyright complaints: 74%
  • Registered trademarks: 52%
  • Counterfeits: 85%
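As a quick sanity check, the per-month averages quoted above follow directly from dividing each six-month total by six, and the removal rates can be combined with the totals to estimate how many complaints led to removals. A minimal sketch (the category names and the assumption of a six-month reporting period are ours):

```python
# Reproduce the per-month averages from the H2 2018 figures quoted above,
# and estimate removals by applying each category's removal rate.
REPORT_MONTHS = 6  # assumed: "last half of 2018" = 6 months

complaints = {
    "copyright": 337_100,
    "trademark": 44_600,
    "counterfeit": 37_500,
}

removal_rates = {
    "copyright": 0.74,
    "trademark": 0.52,
    "counterfeit": 0.85,
}

for category, total in complaints.items():
    monthly = total // REPORT_MONTHS                   # average notices/month
    removed = round(total * removal_rates[category])   # estimated removals
    print(f"{category}: ~{monthly:,} notices/month, ~{removed:,} removed")
```

Running this reproduces the averages in the list above (56,183; 7,433; 6,250 per month).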


Social Network Removal Process

This data confirms the care that Facebook takes with its community and the content creators who use it, critically analyzing the truthfulness and quality of the complaints received. Facebook works not only to help the content owner, but also to raise the awareness of those whose content is reported, employing a global intellectual property operations team that reviews reports and removes content when so justified. Human analysis is important to ensure that the removals are valid and to avoid reports that may be fraudulent, erroneous or made in bad faith.

When content is removed, the user is notified and receives a portion of the report explaining why the content was removed. In addition to offering a timeframe for reviewing complaints, Facebook in some cases offers the owner of the reported content an opportunity to challenge a content-removal decision (e.g., a counter-notification under the Digital Millennium Copyright Act, or DMCA).
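As a rough illustration of the counter-notification path mentioned above, the elements a DMCA counter-notification must contain (per 17 U.S.C. § 512(g)(3)) can be modeled as a simple checklist. The field names below are our own shorthand, not Facebook's actual form:

```python
# Hypothetical checklist of the statutory elements of a DMCA
# counter-notification (17 U.S.C. § 512(g)(3)); field names are illustrative.
counter_notification = {
    "identification": "URL and description of the removed material",
    "good_faith_statement": "Statement under penalty of perjury that the "
                            "material was removed by mistake or misidentification",
    "contact_info": "Name, address, and telephone number of the user",
    "jurisdiction_consent": "Consent to the jurisdiction of the relevant "
                            "federal district court",
    "signature": "Physical or electronic signature",
}

REQUIRED = {"identification", "good_faith_statement", "contact_info",
            "jurisdiction_consent", "signature"}

def is_complete(notice: dict) -> bool:
    """Check that every statutory element is present and non-empty."""
    return REQUIRED <= notice.keys() and all(notice[k] for k in REQUIRED)
```

A notice missing any element (e.g., an unsigned one) would fail this check and would not be a valid counter-notification.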

Facebook believes that educating users about its intellectual property practices can help them make better decisions about sharing content on its platforms. The company seeks to maintain a safe and welcoming environment for its users and content creators.

Still, given the large volume of complaints the platform receives daily, a brand can run serious risks on Facebook. That is why hiring a Digital Risk Protection solution, one with the expertise to send prompt and accurate reports, is attractive: it minimizes the social media risks faced by businesses and their customers.



Eduardo Schultze, Coordinator of the CSIRT at Axur, holds a degree in Information Security from UNISINOS (Universidade do Vale do Rio dos Sinos). He has worked since 2010 with fraud targeting the Brazilian market, mainly phishing and malware.


Matheus G. Loyola

An undergraduate in Audiovisual Production, specialized in Marketing, and also a photographer. I have worked in all those areas at Axur, and I have now found myself (and my love) as a member of the Sales team.