
Meta Disables Over 150,000 Accounts Used in Digital Scams Worldwide

📅 2026-03-12 · ⏱️ 11 min read

Quick Summary

Meta removed over 150,000 fraudulent accounts from Facebook, Instagram and WhatsApp used in financial fraud and romance scam schemes. Learn how to protect yourself.

Meta, the parent company of Facebook, Instagram, and WhatsApp, announced in March 2026 the mass removal of over 150,000 accounts used in coordinated digital scam schemes. The operation, described by the company as the largest anti-fraud cleanup ever carried out on its platforms, dismantled criminal networks operating in more than 40 countries that caused estimated losses of billions of dollars to victims around the world.

The action is the result of months of investigation involving artificial intelligence teams, specialized human analysts, and cooperation with law enforcement agencies in multiple countries. The identified scams include sophisticated financial fraud, fake romance schemes known as "catfishing," fraudulent cryptocurrency investment offers, and phishing campaigns that imitated legitimate brands with near-perfect precision.

The Scale of the Problem #

[Image: Meta's security operations center using AI to detect fraudulent accounts]

The operation's numbers reveal the alarming scale of the digital scam problem on Meta's platforms:

  • Accounts removed: over 150,000
  • Countries affected: more than 40
  • Criminal groups identified: 87 distinct networks
  • Fraudulent pages deactivated: over 12,000
  • Fraudulent ads removed: over 50,000
  • Estimated victim losses: $4.7 billion in 2025-2026
  • Victims identified: estimated 2.3 million people

Meta stated that it used a combination of advanced artificial intelligence models and human investigation to identify patterns of suspicious behavior. The AI algorithms were trained to detect signals such as mass account creation, use of photos stolen from other people, repetitive messaging patterns, and attempts to redirect conversations to external payment platforms.
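The signal-based approach described above can be illustrated with a simple weighted-scoring sketch. The signal names, weights, and threshold below are hypothetical, invented for illustration; Meta has not disclosed how its models actually combine these signals.

```python
# Hypothetical sketch of rule-based signal scoring for account review.
# Signal names and weights are illustrative, not Meta's actual model.

SIGNAL_WEIGHTS = {
    "account_age_hours_lt_48": 0.30,    # freshly created account
    "stock_or_stolen_photo": 0.25,      # profile photo matches a known image
    "repeated_message_template": 0.25,  # near-identical outreach messages
    "external_payment_redirect": 0.20,  # pushes chat toward off-platform payment
}

def fraud_score(signals: set[str]) -> float:
    """Sum the weights of the signals observed on an account (0.0 to 1.0)."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in signals)

def flag_for_review(signals: set[str], threshold: float = 0.5) -> bool:
    """Flag an account for human review when its combined score crosses the threshold."""
    return fraud_score(signals) >= threshold
```

In a production system each signal would itself come from a learned model rather than a fixed weight, but the combination step, many weak signals aggregated into one review decision, works the same way.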

The Most Common Types of Scams #

[Image: Infographic showing the most common types of online scams]

Romance Scams (Catfishing) #

Romance scams represent approximately 35% of the cases identified in the operation. Criminals create fake profiles using attractive photos stolen from other people — frequently from models, military personnel, or successful professionals — and initiate virtual relationships with victims, generally lonely people or those in situations of emotional vulnerability.

After weeks or months of building trust, scammers invent personal or professional emergencies requiring money: medical bills, airline tickets to "finally meet," or fictitious legal problems. The victims, emotionally invested in the relationship, end up transferring significant amounts — in some cases, tens of thousands of dollars — before realizing they have been deceived.

A particularly shocking case involved a 67-year-old woman in São Paulo who transferred over $75,000 over 14 months to a supposed American military officer stationed in Iraq. The fake profile was identified as part of a network of 42 accounts operated by a criminal group based in Nigeria, demonstrating the international reach and sophistication of these operations.

Cryptocurrency Investment Fraud #

Representing 28% of identified scams, investment frauds promise extraordinary returns in cryptocurrencies or other financial assets. Scammers create Facebook groups and Instagram channels that mimic legitimate investment platforms, using fabricated testimonials and falsified screenshots of supposed profits to attract victims.

Many of these schemes use the "quick profit" tactic: the victim is encouraged to make a small initial investment and, miraculously, receives a real return — paid by the scammers with money from other victims. Confident in the system, the victim then invests much larger amounts, which are immediately diverted by the criminals. This is essentially a digital Ponzi scheme operating on a global scale, and the ease with which such schemes can be propagated through social media has made them exponentially more dangerous than their historical predecessors.

Fake Tech Support #

About 20% of scams involved criminals impersonating technical support representatives from companies like Meta itself, banks, telephone operators, and technology companies. Victims received urgent messages alerting about "security problems" in their accounts and were directed to fake websites that collected their login credentials and banking information. These sophisticated phishing operations often used domains virtually identical to legitimate websites, making it extremely difficult for average users to distinguish between real and fraudulent communications.

Phishing and Identity Theft #

Phishing scams, representing 17% of cases, used pages that perfectly imitated legitimate websites of banks, online stores, and even government agencies. Links were distributed through Messenger and WhatsApp messages, exploiting the trust people place in messages received from "friends" whose accounts had been previously compromised.

The Technology Behind Detection #

[Image: Elderly person concerned while looking at phone with suspicious messages]

Meta has invested heavily in fraud detection technology in recent years, and the March 2026 operation demonstrates the potential of these tools. Technologies employed include reverse facial recognition systems to identify stolen photos, AI-based behavioral analysis to detect interaction patterns typical of scammers, and natural language processing to identify scam scripts in dozens of languages.

The company revealed that its AI models can identify a fraudulent account with 97% accuracy within 48 hours of its creation. However, scammers also constantly evolve, using increasingly sophisticated techniques to bypass detection systems, including the use of deepfakes, VPNs to mask locations, and even artificial intelligence to generate more convincing messages. The arms race between platform security teams and criminal organizations represents one of the most significant technological battles of our era.

The Role of AI in Fraud #

Ironically, the same technology Meta uses to combat fraud is being used by the scammers themselves. Generative AI tools are being employed to create extremely realistic profile photos that do not correspond to any real person, write personalized messages in multiple languages with native fluency, and even generate deepfake audio and video that can be used in video calls to convince victims they are dealing with real people.

The Global Impact of Digital Scams #

The losses caused by digital scams have reached alarming proportions on a global scale. According to the Federal Bureau of Investigation (FBI), Americans lost over $12.5 billion to online fraud in 2025 — a 22% increase over the previous year. In Brazil, Central Bank data indicates that losses from digital fraud exceeded $1.5 billion in 2025, representing annual growth of 30% over the last three years. These staggering numbers reveal that digital fraud has become one of the most profitable criminal enterprises in the world, surpassing many traditional forms of organized crime.

Who Are the Victims? #

Contrary to the stereotype that only elderly people fall for online scams, Meta's data reveals that victims of digital fraud are distributed across all age groups:

  • 18-34 years: 31% of victims (mainly investment fraud)
  • 35-54 years: 38% of victims (mix of financial and romance fraud)
  • 55+ years: 31% of victims (predominantly romance scams and fake support)

The 35-54 age group is particularly vulnerable because it combines enough familiarity with technology to use the platforms heavily with comparatively little awareness of the specific risks of each type of scam. Young people, in turn, are especially susceptible to promises of quick wealth in cryptocurrencies and "extra income" opportunities that turn out to be illegal schemes.

How to Protect Yourself #

[Image: Digital safety tips with smartphone protected by shields]

Meta published, along with the operation announcement, an updated security guide for users. Cybersecurity experts recommend the following practices to protect against social media scams:

1. Be Suspicious of Unknown Contacts #

Never accept friend requests from people you do not know personally. Scammers frequently create attractive profiles with photos stolen from other people to initiate contact. Check the age of the account, the number of publications, and the consistency of the profile before interacting with any unknown contact.

2. Enable Two-Factor Authentication (2FA) #

Two-factor authentication adds an extra layer of security to your account, requiring an additional code beyond your password to log in. Even if a scammer obtains your password, they will not be able to access your account without the second authentication factor. This simple step can prevent the vast majority of unauthorized account access.

3. Never Send Money to Strangers #

No legitimate person you met online will request money transfers. This is the golden rule of digital security: if someone you met on the internet asks for money, it is a scam, and you should cut contact immediately, no matter how convincing the story seems.

4. Verify Links Before Clicking #

Before clicking any link received by message, check the complete URL. Fraudulent links frequently use domains that resemble legitimate sites but contain small differences, such as "faceboook.com" instead of "facebook.com." Taking a moment to verify can save you from devastating financial and personal consequences.
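The "faceboook.com" trick can be caught automatically by measuring edit distance: a domain that is one or two character edits away from a trusted brand, but not an exact match, is almost certainly a lookalike. The sketch below uses a tiny hand-picked trusted list for illustration; real checkers compare against large brand databases and also handle Unicode homoglyphs.

```python
# Sketch: flag domains within a small edit distance of a known brand domain.
# The trusted list here is illustrative, not a real deny/allow list.

TRUSTED = {"facebook.com", "instagram.com", "whatsapp.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(domain: str) -> bool:
    """Suspicious if close to, but not exactly equal to, a trusted domain."""
    return any(0 < edit_distance(domain.lower(), t) <= 2 for t in TRUSTED)
```

The same idea scaled up (plus homoglyph normalization, so "facebοοk.com" with Greek omicrons is also caught) is how mail filters and browsers flag typosquatted phishing domains.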

5. Report Suspicious Activity #

All Meta platforms have reporting tools to flag suspicious accounts, messages, or ads. Reporting not only protects you but helps Meta identify and remove fraudulent accounts more quickly, protecting other users across the entire platform ecosystem.

Platform Responsibility #

The Meta cleanup operation, while significant, raises important questions about the responsibility of social media platforms in fraud prevention. Critics argue that companies like Meta profit from the engagement generated by fraudulent accounts and should invest more in proactive prevention rather than reactive actions that occur only after millions of people have already been harmed.

Legislators in several countries are pushing for stricter regulations requiring technology companies to verify user identity and compensate scam victims. The European Union is considering expanding the Digital Services Act to include direct financial liability for platforms. In the United States, debate over reforming Section 230 of the Communications Decency Act has intensified, with some arguing that the broad immunity it provides is no longer appropriate given the scale of social media fraud.

In Brazil, the Marco Civil da Internet and the LGPD establish limited obligations. Australia's Online Safety Act and the United Kingdom's Online Safety Bill represent more aggressive approaches, requiring platforms to take proactive measures with substantial fines for non-compliance. India's IT Act focuses on traceability, requiring platforms to identify the original sender of certain messages.

Notable Cases #

The Fake Pix Scam in Brazil #

A network of 23 fake accounts operated Facebook groups offering electronics at irresistible prices, all requesting advance Pix payment. Over 4,000 Brazilians fell victim between January and March 2026, with total losses estimated at $2.3 million. The scammers used fake business registration numbers and websites copied from legitimate stores.

The Forex Pyramid in Latin America #

An investment scheme in forex operating through WhatsApp and Instagram groups, promising returns of 30% per month, collected over $8 million from approximately 8,000 investors before being dismantled. The scheme's leaders were identified by Brazilian Federal Police with Meta's assistance and face charges of qualified fraud, money laundering, and criminal organization.

Psychological Manipulation Techniques #

Modern scammers use sophisticated psychological manipulation techniques, many based on scientific persuasion principles identified by researchers like Robert Cialdini. Understanding these techniques is fundamental to protection:

Artificial Urgency #

Scammers create a sense of urgency to prevent victims from thinking rationally. Phrases like "this offer expires in 30 minutes" or "your friend needs help now" are designed to trigger emotional responses that bypass critical thinking. Time pressure is one of the most effective tools of fraudsters, forcing impulsive decisions without time for verification or consultation with third parties.

Fabricated Social Proof #

Fake testimonials, fabricated screenshots of supposed profits, and groups inflated with fake members create an illusion of legitimacy. When victims see dozens of people "confirming" the results of an investment or product, the natural tendency is to trust the consensus, even if it is entirely fabricated. Scammers go as far as creating complete ecosystems of fake accounts that interact with each other to simulate an active and thriving community, making potential victims feel safe when they see apparent consensus.

Reciprocity #

Scammers frequently offer something first — an initial "profit" on an investment, a virtual gift, or simply attention and affection in the case of romance scams — to create a sense of obligation in the victim. This principle of reciprocity is deeply rooted in human psychology and makes it much more difficult to refuse subsequent requests for money.

Isolation #

In romance scams and some investment schemes, scammers attempt to isolate the victim from friends and family who might alert them to the fraud. They may suggest the relationship or investment should be kept "secret" or that others "wouldn't understand" the opportunity. This isolation removes the social safety net that would normally protect the victim from harmful financial decisions.

The Future of Digital Security #


The removal of 150,000 fraudulent accounts is a significant step, but represents only a fraction of the problem. New fraudulent accounts are created daily, and the constant evolution of scam techniques means the fight against digital fraud is an endless arms race. Meta announced plans to invest $2 billion in security and fraud fighting in 2026, including hiring 5,000 additional security specialists and developing new AI models capable of detecting deepfakes and manipulation patterns in real time.

Artificial intelligence is paradoxically both the most powerful weapon in the fight against scams and the most dangerous tool in the hands of scammers. The ability to generate realistic human faces that do not exist, synthetic voices indistinguishable from real voices, and increasingly convincing deepfake videos is destroying the last barrier of trust people had in digital communications: visual and auditory evidence. In the near future, seeing someone on a video call may no longer be sufficient to confirm their identity.

Meta and other technology companies are exploring the use of blockchain for identity verification, digital watermarks on AI-generated content, and decentralized reputation systems that could help identify trustworthy accounts. However, each new security measure generates a race for scammers to develop new ways to circumvent it, creating an endless cycle of attack and defense.

Effective protection against digital scams requires a combined approach: advanced detection technology from platforms, digital education for users, adequate legislation from governments, and international cooperation between law enforcement agencies. As long as any of these pillars remains weak, scammers will continue finding gaps to exploit vulnerable people around the world. The battle for digital security is ultimately a battle for trust, and restoring that trust in social platforms will be the biggest challenge of the next decade.

Digital education must begin in schools, with programs that teach children and teenagers to recognize online manipulation attempts. Adults and elderly people need continuous awareness campaigns informing them about the latest scams and tactics used by criminals. Collaboration between banks, telecommunications operators, and digital platforms to create real-time cross-alerts when suspicious transactions are detected can also prevent millions of dollars in losses every year.

Financial institutions in several countries are already implementing AI-powered systems that flag suspicious transactions and alert customers before transfers are completed. In Australia, a pilot program that required banks to verify the recipient's identity before processing large transfers reduced scam losses by 40% in its first year. Similar initiatives in the United Kingdom and Singapore have shown promising results, suggesting that cross-sector cooperation between technology companies, financial institutions, and law enforcement may be the most effective approach to combating the global scam epidemic.

The ultimate goal is creating a digital ecosystem where trust can be established through verifiable credentials and transparent communication channels, making it exponentially harder for scammers to operate while preserving the openness and connectivity that make social media valuable to billions of people worldwide. Only through sustained investment in education, technology, regulation, and international cooperation can we hope to turn the tide against the ever-evolving threat of digital fraud that impacts millions of lives every year.


Sources: Meta Transparency Center, FBI Internet Crime Complaint Center (IC3), Central Bank of Brazil, ANPD, Federal Police, European Commission Digital Services Act, Reuters, Bloomberg, Kaspersky Cybersecurity Report 2026

