The Root Servers
The 13 logical root servers that anchor the entire Domain Name System
Thirteen letters, one system
Every DNS resolution that reaches beyond a resolver’s cache begins at the root. The DNS root server system consists of 13 logical root servers, identified by the letters A through M, operated by 12 independent organizations. These 13 addresses are hardcoded into every recursive resolver on earth — a file called the “root hints” that ships with DNS software and rarely changes.
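The first thing a resolver does with those root hints is send a "priming" query: it asks a root server for the current list of root servers (QNAME `.`, QTYPE `NS`). As a sketch of what that packet looks like on the wire, here is a minimal builder using only Python's standard library; the layout follows the RFC 1035 message format, and `build_priming_query` is an illustrative name, not a real resolver's API.

```python
import struct

def build_priming_query(txid: int) -> bytes:
    """Build the DNS 'priming' query a resolver sends at startup:
    QNAME = "." (the root), QTYPE = NS (2), QCLASS = IN (1)."""
    # Header: ID, flags (all zero: an iterative query), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    # The root name encodes as a single zero byte, then QTYPE and QCLASS.
    question = b"\x00" + struct.pack("!HH", 2, 1)
    return header + question

pkt = build_priming_query(0x1234)
print(len(pkt))  # 17 bytes: 12-byte header + 5-byte question
```

Sending this 17-byte datagram to any of the 13 hinted addresses returns the authoritative list of root server names and addresses, refreshing whatever the shipped hints file contained.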
Why 13? The constraint is historical. The original DNS specification (RFC 1035) limited UDP responses to 512 bytes. A priming response listing all root server names and their IPv4 addresses needed to fit within that limit. Thirteen was the maximum. When IPv6 addresses were later added for each root, the responses grew beyond 512 bytes, but EDNS had arrived by then to handle larger payloads.
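A back-of-envelope calculation shows why 13 fits. The sizes below assume maximal DNS name compression, which the shared `root-servers.net.` suffix was chosen to enable (the root servers were renamed to that scheme in the mid-1990s partly for this reason); real responses varied slightly, and the constants here are illustrative estimates, not a wire-format dump.

```python
HEADER, QUESTION = 12, 5   # fixed header + "." NS IN question section
FIRST_NS = 1 + 10 + 20     # owner "." + fixed RR fields + full "a.root-servers.net."
OTHER_NS = 1 + 10 + 4      # later NS names compress to one label + a 2-byte pointer
A_RECORD = 4 + 10 + 4      # compressed owner + fixed RR fields + 4-byte IPv4 address

def priming_response_size(n: int) -> int:
    """Rough size of a priming response listing n root servers (IPv4 glue only)."""
    return HEADER + QUESTION + FIRST_NS + (n - 1) * OTHER_NS + n * A_RECORD

print(priming_response_size(13))  # roughly 462 bytes, under the 512-byte UDP cap
```

Under these assumptions, 13 names with IPv4 glue land around 462 bytes, comfortably inside the limit, with the remaining headroom left for protocol overhead.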
The number 13 is misleading, though. Behind those 13 IP addresses sit approximately 1,959 physical server instances distributed across all six populated continents using anycast routing.
The root server operators
| Letter | Operator | Instances |
|---|---|---|
| A | Verisign, Inc. | 59 |
| B | USC Information Sciences Institute | 6 |
| C | Cogent Communications | 13 |
| D | University of Maryland | 231 |
| E | NASA (Office of the CIO) | 328 |
| F | Internet Systems Consortium (ISC) | 354 |
| G | Defense Information Systems Agency (DISA) | 6 |
| H | U.S. Army DEVCOM Army Research Lab | 12 |
| I | Netnod (Sweden) | 89 |
| J | Verisign, Inc. | 150 |
| K | RIPE NCC (Netherlands) | 149 |
| L | ICANN | 143 |
| M | WIDE Project (Japan) | 28 |
The distribution is uneven by design. F-root (ISC) operates 354 instances while B-root and G-root each operate only 6. The operators with the most instances have deployed heavily across developing regions where DNS infrastructure was historically sparse. E-root (NASA) and D-root (University of Maryland) have particularly large footprints because they participate in aggressive anycast deployment programs.
Verisign is the only operator that runs two root servers (A-root and J-root); every other operator runs exactly one.
Anycast: the invisible multiplier
Anycast routing is what transforms 13 IP addresses into a global network of nearly 2,000 servers. In anycast, the same IP address is announced from multiple locations simultaneously. When a resolver sends a query to 198.41.0.4 (A-root), the internet’s routing system delivers that packet to the nearest A-root instance as determined by BGP path selection, typically the instance with the shortest AS path.
This means a resolver in Tokyo reaches a different physical server than a resolver in London, even though both send packets to the same IP address. The effect is dramatic:
- Reduced latency. Queries reach a nearby instance instead of crossing oceans.
- DDoS resilience. Attack traffic is distributed across all instances rather than hitting a single server.
- Automatic failover. If an instance goes down, its BGP announcement is withdrawn and traffic routes to the next-nearest instance.
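The routing behavior behind that list can be sketched in a few lines. This is a deliberately simplified model: real BGP best-path selection weighs local preference, MED, and several other attributes before AS-path length, and the site names and AS numbers here are purely illustrative.

```python
# Hypothetical instance table: one anycast service IP announced from several
# sites, each reachable over a different BGP AS path (values are illustrative).
INSTANCES = {
    "tokyo":     ["AS2914", "AS7500"],
    "frankfurt": ["AS3320", "AS1299", "AS25152"],
    "ashburn":   ["AS701", "AS2914", "AS3356", "AS26415"],
}

def anycast_pick(instances: dict[str, list[str]]) -> str:
    """Mimic one slice of BGP best-path selection: prefer the announcement
    with the shortest AS path. Real routers consider many more attributes."""
    return min(instances, key=lambda site: len(instances[site]))

print(anycast_pick(INSTANCES))  # → "tokyo"
```

Failover falls out of the same mechanism: when a site withdraws its announcement, it simply disappears from the table, and the next-shortest path wins.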
Before anycast, root servers were unicast — a single machine in a single location. The transition began in 2002 and was largely complete by 2006.
What the root servers actually do
Root servers do very little, and that is the point. They do not know the IP address of www.example.com. They do not resolve queries end-to-end. They answer exactly one type of question: “Who is responsible for this top-level domain?”
When a recursive resolver has nothing in its cache and needs to resolve a name, it sends a query to a root server. The root server responds with a referral — a list of the authoritative nameservers for the relevant TLD. For a .com query, the root returns the addresses of the .com TLD servers. The resolver then contacts those TLD servers directly.
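The referral chain can be modeled as a walk down a delegation tree. The sketch below uses a toy in-memory tree rather than real DNS packets; the server labels and the final address are illustrative stand-ins, and a real resolver would also handle glue, retries, and caching at every step.

```python
# Toy iterative resolution: the root only refers the resolver downward;
# it never answers the final question itself.
DELEGATIONS = {
    ".":           {"com.": "tld-server"},
    "tld-server":  {"example.com.": "auth-server"},
    "auth-server": {"www.example.com.": "93.184.216.34"},  # illustrative answer
}

def resolve(name: str) -> str:
    server = "."  # every cold lookup starts at the root
    while True:
        zone_data = DELEGATIONS[server]
        # Follow the longest matching suffix the current server is responsible for.
        match = max((z for z in zone_data if name.endswith(z)), key=len)
        nxt = zone_data[match]
        if match == name:   # reached the authoritative answer
            return nxt
        server = nxt        # otherwise this is a referral: ask the next server

print(resolve("www.example.com."))
```

Note that the root appears exactly once in the walk, which is precisely why it can afford to answer only one kind of question.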
This referral-based design means root servers handle only a tiny fraction of the internet’s total DNS traffic. Recursive resolvers aggressively cache root zone data, and the NS records delegating each TLD carry long TTLs (two days for most delegations). Once a resolver knows where to find the .com TLD servers, it does not ask the root again until that cached record expires.
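That caching behavior is simple to sketch. The `ReferralCache` class below is a minimal illustration, not any real resolver's code: it stores a delegation with an expiry time and returns it only while the TTL is still running, which is roughly why the root sees so few repeat questions.

```python
import time

class ReferralCache:
    """Minimal TTL cache for delegation data: a resolver re-asks the root
    only after a cached referral expires (a sketch, not a real resolver)."""
    def __init__(self):
        self._store = {}  # zone -> (nameservers, expires_at)

    def put(self, zone, nameservers, ttl):
        self._store[zone] = (nameservers, time.monotonic() + ttl)

    def get(self, zone):
        entry = self._store.get(zone)
        if entry and time.monotonic() < entry[1]:
            return entry[0]
        self._store.pop(zone, None)  # expired: the next lookup goes upstream
        return None

cache = ReferralCache()
cache.put("com.", ["a.gtld-servers.net."], ttl=172800)  # two-day delegation TTL
print(cache.get("com."))
```

With a two-day TTL, even a resolver serving millions of clients asks the root about .com at most a handful of times per day.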
Root server query volume
Despite serving the foundation of the entire DNS hierarchy, root servers handle surprisingly little traffic relative to the internet’s scale:
| Metric | Value |
|---|---|
| Root server queries per day | ~130 billion |
| Estimated global DNS queries per day | ~500 trillion |
| Root traffic as share of total DNS | ~0.026% |
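The share figure in the table is straightforward to verify from the two volume estimates above it:

```python
root_per_day   = 130e9   # ~130 billion root server queries per day
global_per_day = 500e12  # ~500 trillion global DNS queries per day (estimate)

share = root_per_day / global_per_day
print(f"{share:.4%}")    # → 0.0260%
```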
Root query volume grew approximately 40% between early 2023 (~90 billion/day) and early 2025 (~130 billion/day), driven by increased internet usage, IoT growth, and more devices making DNS queries.
Much of the traffic that does reach root servers consists of queries for non-existent TLDs: typos, malware callbacks to random domains, and misconfigured software. These “junk” queries benefit little from caching, because each randomly generated name is unique and the NXDOMAIN answer for one does not help the next, so they contribute disproportionately to root server load.
The root zone file
The root zone itself is remarkably small — under 2 MB. It contains NS records and glue records for every TLD in existence (approximately 1,593 as of 2026). IANA maintains the root zone, Verisign generates and distributes it, and all 13 root server operators load it.
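Concretely, each TLD appears in the root zone as a small set of NS records plus glue. The excerpt below shows roughly what the .com delegation looks like; the record format and two-day TTLs follow the published root zone, and the addresses shown are the commonly cited glue for `a.gtld-servers.net`, included here for illustration rather than as an authoritative listing:

```
;; Delegation for .com in the root zone (abbreviated, illustrative)
com.                 172800  IN  NS    a.gtld-servers.net.
com.                 172800  IN  NS    b.gtld-servers.net.
;; Glue: addresses for nameservers that live inside the zones they serve
a.gtld-servers.net.  172800  IN  A     192.5.6.30
a.gtld-servers.net.  172800  IN  AAAA  2001:503:a83e::2:30
```

Multiply a handful of records like these by roughly 1,500 TLDs, add the DNSSEC signatures, and the entire file still fits in under 2 MB.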
Changes to the root zone — adding a new TLD, changing a TLD’s nameservers — follow a formal process through ICANN and IANA. The root zone is signed with DNSSEC, and the root zone signing ceremony (where the root KSK is used) is a carefully choreographed event with multiple trusted community representatives present.
Resilience by design
The root server system has never suffered a complete outage. Even during significant DDoS attacks — including a major assault on root servers in November 2015 that generated 5 million queries per second per server — most root operators continued serving legitimate queries. The system’s resilience comes from three factors:
- Anycast distribution. Attack traffic is absorbed across hundreds of instances worldwide rather than concentrated on a single target.
- Operator diversity. The 12 organizations operate independently, with different hardware, software, networks, and operational practices. A vulnerability in one operator’s infrastructure does not affect the others.
- Caching. Even if all root servers were unreachable for hours, most DNS resolution would continue normally because resolvers cache root zone data. Only queries for TLDs not already cached would fail.
The root server system was deliberately designed with no single point of failure, no single organization in control, and enough redundancy to survive coordinated attacks. Four decades later, that design continues to hold.