Cloudflare Misconfiguration Cascades Into Six-Hour Global Outage Affecting Thousands of Services

What happened
On November 18, 2025, a single database permissions change at Cloudflare caused a Bot Management feature file to double in size. When the oversized file propagated to proxy servers worldwide, it exceeded memory limits and triggered cascading crashes across Cloudflare's core network. DNS resolution, CDN delivery, Workers KV, WAF, and the Cloudflare Dashboard all failed simultaneously. X, ChatGPT, Spotify, Shopify, Canva, and Anthropic's Claude were among the thousands of services unreachable for up to six hours, affecting an estimated 2.4 billion users.[1]
What went wrong
A database permissions change at 11:05 UTC caused a metadata query to return duplicate rows, doubling the size of a configuration file consumed by every proxy server on Cloudflare's global network. The oversized file exceeded a preset memory limit in the proxy processes, causing them to crash. No automated gate existed to reject an oversized configuration file before global propagation, and the incident was compounded because Cloudflare's own Dashboard was taken down by the same outage, hindering the incident response itself.[1]
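The duplicate-row mechanism can be illustrated with a hedged sketch (the table, column, and feature names below are hypothetical, not Cloudflare's actual schema): a metadata query that filters on table name but not on database name implicitly assumes each table is visible exactly once, so when a permissions change makes a second database's copy of the same metadata visible, every row comes back twice and the generated file doubles.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE columns (db TEXT, tbl TEXT, name TEXT)")

# Before the change: only one logical database is visible to the account.
rows = [("default", "features", f"feature_{i}") for i in range(5)]
conn.executemany("INSERT INTO columns VALUES (?, ?, ?)", rows)

def build_feature_file() -> str:
    # The query filters on table name but not on database name,
    # silently assuming each table name is visible exactly once.
    cur = conn.execute("SELECT name FROM columns WHERE tbl = 'features'")
    return json.dumps([r[0] for r in cur.fetchall()])

before = build_feature_file()

# Permissions change: a second database's identical copy of the
# metadata becomes visible to the querying account.
shadow = [("r2", "features", f"feature_{i}") for i in range(5)]
conn.executemany("INSERT INTO columns VALUES (?, ?, ?)", shadow)

after = build_feature_file()
print(len(before), len(after))  # the file exactly doubles
```

Because the duplicated rows are byte-for-byte copies, the output file doubles in size without any of its individual entries looking malformed, which is why only a size or uniqueness check on the whole file would have caught it.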
Lesson learned
Configuration files distributed to every node in a global network are a single point of failure. Size and schema validation must gate every change before it propagates, and operational tooling must run on an independent control plane that cannot be taken down by the service it manages.
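The gate described above can be sketched as a pre-propagation check (a minimal sketch; the size limit, field names, and duplicate check are illustrative assumptions, not Cloudflare's actual pipeline):

```python
import json

MAX_BYTES = 64 * 1024                      # illustrative hard cap on file size
REQUIRED_FIELDS = {"version", "features"}  # illustrative schema

def validate_config(raw: bytes) -> list:
    """Return a list of problems; an empty list means safe to propagate."""
    problems = []
    if len(raw) > MAX_BYTES:
        problems.append(f"file is {len(raw)} bytes, limit is {MAX_BYTES}")
    try:
        doc = json.loads(raw)
    except ValueError as exc:
        return problems + [f"not valid JSON: {exc}"]
    missing = REQUIRED_FIELDS - doc.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    feats = doc.get("features", [])
    if len(feats) != len(set(feats)):
        problems.append("duplicate feature names")  # the Nov 18 failure mode
    return problems

ok = json.dumps({"version": 1, "features": ["f1", "f2"]}).encode()
bad = json.dumps({"version": 1, "features": ["f1", "f1"]}).encode()
print(validate_config(ok))   # []
print(validate_config(bad))  # ['duplicate feature names']
```

The design point is that the validator runs where the file is generated, so a bad file is rejected at one node instead of crashing every proxy after global fan-out.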
Sources
- [1]
External links can go dark — pages move, paywalls appear, domains expire. Every source above includes a Wayback Machine snapshot link as a fallback. All citations are best-effort research; if a source contradicts our summary, the primary source takes precedence.