Cloudflare Outage: Exposing the Pseudo-Decentralization of the Crypto Industry


By: ForesightNews Express · 2025/11/21 15:06

Four major outages in 18 months—why is the centralization dilemma so hard to resolve?



Source: rekt news

Translation: Saoirse, Foresight News


November 18, 2025, 6:20 AM Eastern Time. Many of us experienced a network outage.


It was not a gradual interruption, nor was there any warning. One moment you were scrolling on your phone, trading, or chatting with AI; the next, all you could see were 500 error pages everywhere.


Twitter crashed mid-tweet, ChatGPT stopped responding halfway through a conversation, and Claude simply froze.


Even Downdetector—the site you check when every other platform is down—couldn’t load, unable to tell you that “all services are down.”


Roughly 20% of the world’s internet traffic simply vanished, all because Cloudflare, the company that is supposed to protect the internet from attacks, had accidentally “attacked” itself.


A routine configuration change (database permission update) triggered a hidden bug in its bot protection system, and in an instant, this “gatekeeper” locked everyone out.


In October, when Amazon Web Services (AWS) caused Coinbase to go offline, crypto Twitter was busy mocking the pitfalls of “centralization.”


But when the Cloudflare outage hit in November? At least for the first few hours, the entire crypto community was silent.


After all, when the infrastructure that Twitter relies on is down, you can’t even discuss “infrastructure fragility” on Twitter.


Multiple critical services stalled, including transportation systems; some companies’ web interfaces failed; and block explorers and dashboards such as Arbiscan and DeFiLlama returned 500 errors. Yet there was no sign of consensus failure on the blockchains themselves.


When your so-called “decentralized” revolution can be halted by a single company’s oversized configuration file, who’s really in control?


Outage Timeline: From “Configuration Change” to “Total Network Paralysis”


UTC 11:05: Database access control change deployed.


23 minutes later, at UTC 11:28, the change reached user environments, and the first error logs appeared in users’ HTTP traffic.


In other words: the outage had already happened, but no one knew what was wrong at the time.


By UTC 11:48, Cloudflare’s official status page finally admitted to “internal service failures”—corporate speak for: “Everything’s a mess, and everyone can see it.”


The chain reaction was sudden: the change broke Cloudflare’s bot management layer, and when the system loaded a feature configuration file that had doubled in size, its proxy service simply crashed.


Downstream systems collapsed: Workers KV (key-value storage service) and Access (access control service) couldn’t connect to the proxy; error rates across the network soared, and as monitoring tools became overloaded, CPU usage spiked.


Traffic kept pouring into Cloudflare’s edge nodes—but the proxy service could no longer respond.


At first, Cloudflare thought it was under attack—an ultra-large-scale distributed denial-of-service (DDoS) attack.


Even stranger, the official status page, which is hosted entirely outside Cloudflare’s infrastructure, also went down, leading engineers to suspect a coordinated attack on both its core systems and monitoring infrastructure.


But that wasn’t the case. There was no external attack—the problem was internal.


Soon after services were restored, Cloudflare CTO Dane Knecht issued a public apology, calling the incident “completely unacceptable” and attributing it to a routine configuration change that triggered a latent bug in the bot protection layer.


“We let down our customers and the broader internet community,” Knecht wrote in the statement. “A latent bug in a service supporting our bot protection function crashed after a routine configuration change, causing widespread outages across our network and other services. This was not an external attack.”


At the peak of the outage, Downdetector received 11,183 reports.


This “digital blackout” lasted over five and a half hours, with full service restored at UTC 17:06; however, the worst impact was mitigated as early as 14:30, after the correct bot management configuration file was deployed globally.


Impact of the Outage: From Web2 to Crypto, No One Spared


Web2 Platforms Hit First


X platform received 9,706 outage reports.


Users saw not their familiar timeline, but an “Oops, something went wrong” error message.


ChatGPT suddenly “went silent” mid-conversation, no longer responding to any commands.


Spotify’s streaming service was interrupted, Canva locked designers out, and both Uber and DoorDash experienced malfunctions.


Even gamers weren’t spared—League of Legends players were forcibly disconnected mid-game.


There were even reports that McDonald’s self-service kiosks displayed error screens, right in the middle of the lunch rush.


The crypto sector was not immune either.


Crypto Platforms Suffer Widespread Outages


Coinbase’s frontend completely crashed, leaving users with an unresponsive login page.


Kraken’s web and mobile apps both went down—a direct result of Cloudflare’s global outage.


BitMEX posted on its status page: “Investigating the cause of the outage, platform performance degraded, but user funds are safe.” Same script, different exchange.


Etherscan couldn’t load, and Arbiscan went completely offline.


DeFiLlama’s analytics dashboard intermittently showed internal server errors.


Even Ledger issued a notice stating that some services were degraded due to the Cloudflare outage.


The Only “Exception”: Blockchain Protocols Themselves


But the following systems were unaffected: major exchanges such as Binance, OKX, Bybit, Crypto.com, and KuCoin reportedly saw no frontend failures, on-chain transactions went through as normal, and the blockchains themselves kept operating with no sign of consensus failure.


Blockchain protocols always run independently—the problem wasn’t on-chain, but in the Web2 infrastructure people use to access the blockchain.


If the blockchain is still running but no one can access it, is crypto really “online”?


In-Depth Analysis: How Did a Database Query Crash 20% of the Internet?


Cloudflare doesn’t host websites, nor does it provide cloud servers like AWS.


Its role is that of a “middleman”: standing between users and the internet, serving 24 million websites, and processing 20% of global internet traffic through nodes in 330 cities across 120 countries.


Cloudflare’s marketing pitch: it positions itself as the “shield and accelerator of the internet,” providing 24/7 DDoS protection, bot protection, traffic routing, a global web application firewall (WAF), TLS termination, Workers-based edge computing, and DNS services, all running on a unified “security-performance” network.


In reality: it holds 82% of the DDoS protection market, its edge nodes have a total bandwidth of 449 Tbps, and it’s connected to many of the world’s leading ISPs and cloud providers.


The core issue: when the intermediary fails, all services behind it become “out of reach.”


Cloudflare CTO Dane Knecht was blunt on X:


“Let me be clear: earlier today, due to issues with the Cloudflare network, a large amount of traffic that relies on us was affected. We let down our customers and the broader internet community.”


CEO Matthew Prince was even more direct:


“Today was Cloudflare’s worst outage since 2019... In the past six years, we’ve never had an outage that prevented most core traffic from passing through our network.”


The Technical Root of the Outage


It all started with a routine database permission update. At UTC 11:05, Cloudflare made a change to its ClickHouse database cluster to improve security and reliability—allowing users who previously had “implicit access” to now “explicitly” see table metadata.


Where did things go wrong? The database query that generated Cloudflare’s bot protection service configuration file didn’t filter by “database name.”


The query feeding the bot protection system began returning duplicate entries: one set from the default database and another from the underlying r0 storage database. This swelled the feature file, pushing the feature count from about 60 to over 200.
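To make the failure mode concrete, here is a minimal, hypothetical Rust sketch. The table and column names (such as bot_features) are invented and this is not Cloudflare’s actual query or code; it only shows how a metadata lookup that filters by table name but not by database matches rows from every schema it can see, so making the r0 schema visible duplicates every feature, while adding the missing database filter restores the expected result.

```rust
// Hypothetical illustration only: a metadata query that omits the database
// filter returns one row per visible schema, duplicating every feature once
// the "r0" schema becomes visible alongside "default".

struct ColumnMeta {
    database: &'static str,
    table: &'static str,
    name: &'static str,
}

fn main() {
    // After the permission change, metadata for both schemas is visible.
    let system_columns = [
        ColumnMeta { database: "default", table: "bot_features", name: "feature_a" },
        ColumnMeta { database: "default", table: "bot_features", name: "feature_b" },
        ColumnMeta { database: "r0",      table: "bot_features", name: "feature_a" },
        ColumnMeta { database: "r0",      table: "bot_features", name: "feature_b" },
    ];

    // Buggy query shape: WHERE table = 'bot_features' (no database filter).
    let buggy: Vec<&str> = system_columns
        .iter()
        .filter(|c| c.table == "bot_features")
        .map(|c| c.name)
        .collect();

    // Fixed query shape: ... AND database = 'default'.
    let fixed: Vec<&str> = system_columns
        .iter()
        .filter(|c| c.table == "bot_features" && c.database == "default")
        .map(|c| c.name)
        .collect();

    println!("unfiltered query: {} rows (duplicates)", buggy.len()); // 4
    println!("filtered query:   {} rows", fixed.len());              // 2
}
```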


Cloudflare had hardcoded a 200-feature upper limit for memory preallocation, thinking “this is far above our current usage of about 60 features.” This is classic engineering thinking: set what you think is a “generous” safety margin—until the unexpected happens.


The oversized file blew past this limit, and the Rust code panicked with the error: “thread fl2_worker_thread panicked: called Result::unwrap() on an Err value.”
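The pattern behind that panic can be shown with another small, hypothetical Rust sketch; the names MAX_FEATURES and load_feature_file are illustrative, not Cloudflare’s. A hardcoded limit treated as unreachable, with the error branch consumed by unwrap(), turns an oversized input file into a thread panic instead of a graceful fallback.

```rust
// Hypothetical sketch of the failure pattern described above, not Cloudflare's
// code: exceeding a "can't happen" preallocation limit aborts the worker thread
// because the error path is swallowed by unwrap().

const MAX_FEATURES: usize = 200; // assumed generous versus the ~60 features in use

#[derive(Debug)]
struct ConfigError(String);

fn load_feature_file(names: Vec<String>) -> Result<Vec<String>, ConfigError> {
    if names.len() > MAX_FEATURES {
        return Err(ConfigError(format!(
            "feature file has {} entries, limit is {}",
            names.len(),
            MAX_FEATURES
        )));
    }
    Ok(names)
}

fn main() {
    // Duplicate rows push the feature count from ~60 to well over 200.
    let oversized: Vec<String> = (0..240).map(|i| format!("feature_{i}")).collect();

    // unwrap() on the Err value panics the thread, producing an error of the form
    // "thread '...' panicked: called `Result::unwrap()` on an `Err` value".
    let _features = load_feature_file(oversized).unwrap();
}
```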


The bot protection system is a core component of Cloudflare’s control layer. Once it crashed, the health check system that tells the load balancer “which servers are healthy” also failed.


Worse: this configuration file is regenerated every five minutes.


Only when the query ran on “updated cluster nodes” would it generate erroneous data. So every five minutes, Cloudflare’s network would flip between “normal” and “outage”—sometimes loading the correct file, sometimes the wrong one.
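A toy simulation, again purely illustrative and not Cloudflare’s code, shows why a partially rolled-out permission change produces this oscillation: each regeneration cycle is good or bad depending on whether the node that generates the file has already been updated, and once every node is updated the failure becomes permanent.

```rust
// Hypothetical sketch of the five-minute flip-flop: the file is rebuilt on a
// schedule, and whether it is good or bad depends on whether the generating
// node has already received the permission change.

fn generate_feature_count(node_is_updated: bool) -> usize {
    // Updated nodes see both schemas and emit duplicate rows; old nodes do not.
    if node_is_updated { 240 } else { 60 }
}

fn main() {
    const LIMIT: usize = 200;
    // Mid-rollout: some generation cycles hit updated nodes, some do not.
    let rollout_state = [false, true, false, true, true, true];

    for (cycle, &updated) in rollout_state.iter().enumerate() {
        let features = generate_feature_count(updated);
        let status = if features > LIMIT { "OUTAGE" } else { "normal" };
        println!("cycle {cycle} (~{} min): {features} features -> {status}", cycle * 5);
    }
    // Once every node is updated, every cycle exceeds the limit: stable failure.
}
```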


This “flip-flopping” led engineers to believe they were under a DDoS attack—internal errors don’t usually cause systems to “recover and crash” in cycles.


Eventually, all ClickHouse nodes were updated, and every file generated was wrong. The “flip-flopping” stopped, replaced by “complete, stable failure.”


Without accurate health signals, the network defaulted to a conservative mode, marking most servers as “unhealthy.” Traffic kept coming in but could no longer be routed correctly.


Cloudflare’s edge nodes could receive user requests—but couldn’t process them.


“This was not an external attack,” Knecht repeatedly emphasized. “There was no malicious activity, nor was it a DDoS attack. It was just a database query missing a filter, coinciding with a permission update, that ultimately caused the outage.”


Cloudflare had promised “99.99% availability”—but this time, the promise was not kept.


That’s the reality.


History Repeats: Four Major Outages in 18 Months—Why Is the Centralization Dilemma So Hard to Solve?


October 20, 2025—AWS outage lasting 15 hours. DNS resolution for the DynamoDB database failed in US East 1, freezing Coinbase, slowing Robinhood, taking Infura down (and MetaMask with it), and knocking Base, Polygon, Optimism, Arbitrum, Linea, Scroll, and other blockchain networks offline. Although user funds were safe on-chain, many people saw their account balances as “zero.”


October 29, 2025—Microsoft Azure outage. A configuration sync issue in Azure Front Door (the frontend gateway) took Microsoft 365 offline, knocked out Xbox Live, and interrupted enterprise services.


July 2024—CrowdStrike (a security company) shipped a buggy Windows update. The resulting outage grounded flights, delayed hospital procedures, froze financial services, and took days to fully resolve.


June 2022—Cloudflare’s previous major outage. Multiple crypto exchanges were forced to suspend services—the same pattern, just a different year.


July 2019—An earlier Cloudflare outage. Coinbase went down, CoinMarketCap was inaccessible—this was the first “warning sign” everyone ignored.


Four major infrastructure outages in just 18 months.


Four outages, one lesson: centralized infrastructure inevitably leads to “centralized failures.”


Four outages could have accelerated the crypto industry’s shift to decentralization—yet it still relies on infrastructure from three companies.


How many warnings will it take for the industry to shift from “assuming outages might happen” to “building systems on the assumption that outages will happen”?


The “Lie” of Decentralization: Protocol Decentralization Doesn’t Mean Decentralized Access


They once painted you this blueprint:


“Decentralized finance, censorship-resistant currency, trustless systems, no single point of failure, ‘not your keys, not your coins,’ code is law.”


The reality of November 18 delivered a harsh blow: a single morning’s outage at Cloudflare left parts of the crypto industry offline for hours.


The technical truth:

No blockchain protocol was reported to have failed. The Bitcoin network ran normally, as did Ethereum—the chains themselves had no issues.


The reality in practice:

Exchange interfaces crashed, blockchain explorers went down, wallet interfaces failed, analytics platforms crashed, and trading screens showed 500 errors.


Users couldn’t access the “decentralized” blockchain they supposedly “owned.” The protocol itself ran fine—if you could “reach” it.


The following statements may sound harsh to many…


SovereignAI COO David Schwed was blunt:


“With Cloudflare’s outage today and AWS’s outage weeks ago, it’s clear: we can’t simply outsource infrastructure ‘resilience’ to a single vendor. If your organization needs 24/7 uptime, you must build infrastructure assuming outages will happen. If your business continuity plan is just ‘wait for the vendor to restore service,’ that’s pure negligence.”


“Pure negligence”—not an accident, not an oversight, but negligence.


Jameson Lopp’s comment was spot on:


“We have excellent decentralized technology, but by concentrating most services in the hands of a few vendors, we’ve made it extremely fragile.”


What Ben Schiller said during the last AWS outage is just as true now:


“If your blockchain goes down because AWS is down, it’s not decentralized enough.”


Replace “AWS” with “Cloudflare,” and the essence of the problem is exactly the same—the industry has never learned its lesson.


Why Choose “Convenience” Over “Principle”?


Building your own infrastructure means: buying expensive hardware, ensuring stable power, maintaining dedicated bandwidth, hiring security experts, achieving geographic redundancy, setting up disaster recovery, and 24/7 monitoring—each requiring massive resources.


Using Cloudflare only requires: clicking a button, entering a credit card, and deploying in minutes.


DDoS protection is someone else’s job, availability is someone else’s guarantee, scaling is someone else’s headache.


Startups chase “speed to market,” VCs demand “capital efficiency”—everyone chooses “convenience” over “resilience.”


Until the moment “convenience” is no longer convenient.


October’s AWS outage sparked endless Twitter debates about “decentralization.”


November’s Cloudflare outage? Silence.


Not out of “philosophical reflection,” nor “quiet contemplation.”


But because: people wanted to complain, only to find their usual platform (Twitter) was also down due to the infrastructure outage.


When your “single point of failure” is the very platform you use to mock “single points of failure,” you have nowhere to vent.


When the access layer depends on infrastructure from three companies, two of which suffered outages in the same month, “protocol-level decentralization” is meaningless.


If users can’t reach the blockchain, what exactly is our “decentralization” decentralizing?


The Monopoly Dilemma: Three Companies Control 60% of the Cloud Market—Where Does Crypto Go from Here?


AWS controls about 30% of the global cloud infrastructure market, Microsoft Azure 20%, and Google Cloud 13%.


Three companies control over 60% of the cloud infrastructure that supports the modern internet.


The crypto industry, which was supposed to be the solution to “centralization,” now relies on the world’s most centralized infrastructure.


Crypto’s “centralization dependency list”


  • Coinbase — relies on AWS;
  • Binance, BitMEX, Huobi, Crypto.com — all rely on AWS;
  • Kraken, though built on AWS, was still hit by Cloudflare’s CDN (content delivery network) outage.


Many so-called “decentralized” exchanges actually run on centralized infrastructure.


There’s another key difference between the October and November outages:


During the AWS outage, X (formerly Twitter) still worked, so crypto Twitter users could mock “infrastructure fragility.”


But during the Cloudflare outage, X also went down.


When the platform you use to “mock single points of failure” is itself part of the “single point of failure,” you can’t laugh at all.


This irony stalled the industry discussion before it even began.


Three major outages in 30 days—regulators are now paying close attention.


Core Issues Regulators Must Address


  • Do these companies qualify as “systemically important institutions”?
  • Should internet backbone services be regulated as “public utilities”?
  • What risks arise when “too big to fail” meets tech infrastructure?
  • If Cloudflare controls 20% of global internet traffic, is this a monopoly issue?


Corinne Cath-Speth of Article 19 was blunt during the last AWS outage: “When a single vendor collapses, critical services go offline—media becomes inaccessible, secure messaging apps like Signal stop working, and the infrastructure underpinning digital society falls apart. We urgently need cloud diversification.”


In other words: governments are waking up to the fact that just a few companies can bring the internet to a halt.


In fact, decentralized alternatives have long existed—no one wants to use them.


For example, Arweave for storage, IPFS for distributed file transfer, Akash for computing, Filecoin for decentralized hosting.


Why Are Decentralized Solutions “Praised but Not Adopted”?


Performance lags behind centralized solutions, and users can directly feel the latency.


Adoption is extremely low: compared with the one-click convenience of deploying to AWS, decentralized solutions feel cumbersome and complex to users.


Costs are often higher than renting infrastructure from the “big three” (AWS, Azure, Google Cloud).


The reality is:


Building truly decentralized infrastructure is extremely difficult—far harder than imagined.


Most projects only pay lip service to “decentralization” and rarely implement it. Choosing centralized solutions is always the simpler, cheaper option—until four outages in 18 months make people realize the huge hidden cost behind “simple and cheap.”


OORT CEO Dr. Max Li pointed out the industry’s hypocrisy in a recent CoinDesk column:


“For an industry that prides itself on ‘decentralization’ and constantly touts its advantages, to rely so heavily on fragile centralized cloud platforms for its infrastructure is itself hypocrisy.”


His solution: adopt a hybrid cloud strategy, having exchanges distribute key systems across decentralized networks.


Centralized cloud platforms have irreplaceable advantages in performance and scale—but when billions of dollars are at stake and every second of trading matters, their resilience is far inferior to distributed solutions.


Only when the cost of “convenience” becomes high enough to change industry behavior will “principle” triumph over “convenience.”


Clearly, the November 18 outage wasn’t severe enough, nor was the October 20 AWS outage, nor the July 2024 CrowdStrike outage.


How bad does it have to get before “decentralized infrastructure” becomes a requirement rather than a talking point?


On November 18, the crypto industry did not “fail”—the blockchain itself ran perfectly.


The real “failure” was the industry’s collective self-deception: believing you can build “unstoppable applications” on “stoppable infrastructure”; believing “censorship resistance” means anything when three companies control the “access channel”; believing “decentralization” is real when a single Cloudflare configuration file determines whether millions can trade.


If the blockchain keeps producing blocks but no one can submit transactions, is it really “online”?


The industry has no contingency plan.


When something breaks, all you can do is wait for Cloudflare to fix it, wait for AWS to restore service, wait for Azure to deploy a patch.


This is the industry’s current “disaster recovery strategy.”


Imagine: what if digital identity and blockchain become deeply intertwined?


The US Treasury is pushing to embed identity credentials into smart contracts, requiring every DeFi interaction to pass KYC checks.


When the next infrastructure outage happens, users will lose not just trading access—but also the ability to “prove their identity” in the financial system.


A three-hour outage could become three hours of “unable to load the human verification interface”—simply because the verification service runs on failed infrastructure.


The “safety rails” regulators want to build assume “infrastructure is always online.” But the November 18 outage proved that assumption is false.


When surveillance overreach becomes obvious, technologists turn to privacy protection.


Maybe now it’s time to include “infrastructure resilience” in that category.


It shouldn’t be an “optional bonus”—it should be a “foundational requirement.” Without it, nothing else matters.


The next outage is already brewing—it could come from AWS, Azure, Google Cloud, or another Cloudflare incident.


It could be next month, or next week. The infrastructure hasn’t changed, the dependencies haven’t changed, and the industry incentives haven’t changed.


Choosing centralized solutions is still the cheaper, faster, more convenient option—until it isn’t.


When Cloudflare’s next routine configuration change triggers a hidden bug in a critical service, we’ll see the familiar “script” again: endless 500 error pages, trading halted everywhere, blockchains running but inaccessible, people wanting to tweet about “decentralization” only to find Twitter is down, companies promising to “do better next time” but never delivering.


Nothing will change, because “convenience” always trumps “risk mitigation”—until the cost of “convenience” becomes too big to ignore.


This time, the “gatekeeper” was hobbled for more than five hours, with roughly three hours of severe disruption.


Next time, the outage may last longer; next time, it may hit during a market crash “when every second of trading is life or death”; next time, identity verification systems may also be caught up in the outage.


When the infrastructure you depend on collapses at the moment you can least afford it, whose fault is it?


Data sources: The Guardian, Johnny Popov, PC Magazine, IT Professionals, CNBC, Cloudflare, TechCrunch, Associated Press, CoinDesk, Tom’s Hardware, Dane Knecht, Tom’s Guide, Surya, Sheep Esports, TheBlock, Kraken, BitMEX, Ledger, Blockchain News, Statista, Sihou Computer, Jameson Lopp, Ben Schiller, Article 19, CoinTelegraph