Meta’s $16 Billion Secret: How Facebook and Instagram Are Profiting from Scam Ads
The Hidden Fortune Behind Your Feed
Meta, the parent company of Facebook, Instagram, and WhatsApp, is quietly earning billions from an unexpected source — fraudulent advertising. Internal company documents reveal that Meta projected around 10% of its annual revenue, or roughly $16 billion, came from ads promoting scams, banned goods, and illegal activities.
That means every time users scroll through their feed, a surprising share of Meta’s revenue may be coming from deceptive or outright illegal ads slipping through the cracks.
For at least three years, Meta has struggled — or failed — to stop a tidal wave of fraudulent ads. These ads have pushed everything from fake investment opportunities and counterfeit goods to banned medical products and illegal online gambling.
Billions of Scam Ads Flood Meta Platforms
The numbers are staggering. Meta’s systems identified billions of high-risk ads every day, many of which still reached users. Reports suggest that as many as 15 billion scam ads were served daily across its apps, alongside an estimated 22 billion organic scam attempts (fake listings, impersonation accounts, and the like) that spread without any ad payment at all.
In other words, scams have become part of the digital landscape of Meta’s platforms. Users scrolling through Instagram Stories or Facebook feeds are exposed to fake ads at a scale most can’t even imagine.
Why Meta Let It Happen
A Weak Detection System
Meta’s automated tools are designed to catch and block fraudulent advertisers. But here’s the problem: those tools only act if the system is 95% certain that an ad is a scam. Anything below that threshold usually stays live.
Instead of blocking suspicious advertisers outright, Meta often just charged them higher ad rates, effectively profiting more from the very campaigns it suspected were fraudulent.

So when Meta believed an ad was probably a scam but fell short of the 95% bar, it stood to make more money from it, not less: a setup that critics say prioritizes profits over safety.
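The enforcement logic described above can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Meta’s actual code: only the 95% block threshold comes from the reporting, while the 0.50 “suspicious” cutoff and the 2x rate multiplier are invented here for the example.

```python
def handle_ad(scam_probability: float, base_rate: float) -> dict:
    """Illustrative sketch of a confidence-threshold ad pipeline.

    Only the 95% block threshold is from the reporting; the 0.50
    'suspicious' cutoff and the 2x rate multiplier are hypothetical
    numbers chosen for illustration.
    """
    if scam_probability >= 0.95:
        # High confidence: the ad is blocked and earns nothing.
        return {"action": "block", "rate": 0.0}
    if scam_probability >= 0.50:
        # Suspicious, but below the bar: the ad stays live and the
        # advertiser is simply charged a higher rate.
        return {"action": "serve", "rate": base_rate * 2.0}
    # Low suspicion: served at the normal rate.
    return {"action": "serve", "rate": base_rate}


# A 94%-likely scam sails through at double the price, earning the
# platform more than a clean ad would; a 96%-likely scam is blocked.
print(handle_ad(0.94, base_rate=1.00))  # {'action': 'serve', 'rate': 2.0}
print(handle_ad(0.96, base_rate=1.00))  # {'action': 'block', 'rate': 0.0}
```

The perverse incentive is visible in the sketch: under this kind of rule, the most profitable ads are precisely the ones the system suspects but cannot quite prove are scams.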
The Revenue vs. Integrity Dilemma
Internal reports show Meta managers placed strict limits on how much revenue the company could lose by cracking down too hard on scams. Teams were told not to take enforcement actions that would cost more than 0.15% of total revenue.
That might sound small, and it is: 0.15% of more than $160 billion in annual revenue comes to only about $240 million, a tiny fraction of the estimated $16 billion in scam-related ad revenue. The cap, in effect, was a massive incentive to look the other way.
Essentially, Meta seemed to set a cap on how much it was willing to sacrifice to protect users — and it wasn’t much.
Loopholes and Weak Rules
Many scam ads cleverly sidestepped Meta’s policies. For example, fake investment promotions might not explicitly mention illegal terms, allowing them to pass through automated filters.
In one audit, law enforcement shared dozens of scam examples with Meta, and the company concluded that less than a quarter of them technically broke its written rules.
That shows the policy gap: Meta’s enforcement often focused on the exact words used, not the real-world harm caused.
Real Stories of Real Damage
The impact of Meta’s loose controls is felt by real people every day. Victims have reported losing savings to fraudulent investment ads, counterfeit e-commerce stores, and deepfake celebrity endorsements.
One young professional discovered that her Facebook account was hacked and used to promote crypto scams under her name. Friends and colleagues clicked the fake ads, thinking they were genuine recommendations — and lost thousands.
Each click, share, or comment on these fraudulent ads amplifies the problem, making scams spread even faster.
Meta’s Response
When confronted with the internal figures, Meta claimed the $16 billion estimate was exaggerated and that the documents painted a “distorted picture.” The company says it’s taking aggressive action against scams, reporting that it removed more than 134 million fraudulent ads and cut scam reports by more than half in the last 18 months.
Still, critics argue those numbers barely scratch the surface. Even after the removals, the daily flood of scam content remains overwhelming, and enforcement continues to lag far behind.
What’s Meta Doing Next?
Internal plans show that Meta aims to reduce the share of scam-related ad revenue over the next few years. The company’s goal is to cut that percentage from around 10% to under 6% by 2027.
That’s a positive sign, but progress may be slow. Regulators around the world are already increasing scrutiny and considering fines for platforms that fail to stop fraudulent ads.
Meta’s growing reliance on AI to screen advertising might help, but as scammers adopt more sophisticated tactics — including deepfake technology — detection will only get harder.
Why It Matters to You
If you use Facebook, Instagram, or WhatsApp, you’ve almost certainly seen or interacted with a scam ad — even if you didn’t realize it. These ads often look polished and professional, featuring familiar logos, fake celebrity endorsements, or tempting offers.
Clicking on them can lead to identity theft, lost money, or malware. Even if you don’t fall for them, simply engaging with those posts tells the algorithm that you’re interested, leading to more similar ads showing up in your feed.
Advertisers also face risks. Brands that appear next to shady or fraudulent ads can lose credibility, and genuine campaigns may struggle to stand out in an increasingly untrustworthy environment.
The Bigger Picture
What’s happening inside Meta is a reflection of a larger problem with online advertising. Platforms earn money from every click, regardless of whether the ad is real or fake. That creates a built-in conflict of interest — between protecting users and maximizing profit.
Meta’s situation shows how massive tech companies can become dependent on questionable revenue streams without fully realizing the ethical cost. As long as scam ads continue to be profitable, the incentive to eliminate them completely will remain weak.
The challenge now is whether Meta will truly prioritize safety and integrity over short-term gains — or continue to profit quietly from a system built to reward engagement at any cost.