You’re probably very familiar with IP addresses, the unique numbers used to route information to and from your Internet-connected computer. You’ll have seen addresses like this:
198.51.100.42
You might also have seen addresses in this format:
2001:db8:9b40:204f:1c56:d4eb:2848:5c5e
Those are IPv6 addresses, as opposed to the familiar IPv4 addresses. IPv6 is a newer standard, still in the process of being rolled out in many countries. Why the new standard? Because unfortunately, we ran out of IPv4 addresses in 2011.
The idea that we ran out of Internet addresses might seem ridiculous. You’re probably wondering why the designers of the Internet didn’t allow for more addresses. To really understand the answer requires going back into early Internet history.
The ARPANET and NCP
The Internet grew out of the ARPA Computer Network, also known as ARPANET, an experimental network started in 1966 by the Advanced Research Projects Agency (ARPA), the US defense research agency later renamed DARPA.
At that time, computers were large. The cheapest computer available was the DEC PDP-8, which was the size of a refrigerator and a bargain at just $18,500 — equivalent to around $190,000 in 2026 dollars.

A powerful scientific computer would be a mainframe system like an IBM System/360. The cheapest System/360 configurations started at around $133,000, which translates to around $1.4 million in 2026 dollars. "Time-sharing" systems for large mainframes allowed multiple people to share the resources of a single computer from separate terminals or teletypes.

Still, many organizations that wanted to use computers couldn't afford them, or couldn't afford one powerful enough for the work they wanted to do. ARPA's researchers set out to solve this problem by building a network that would allow research organizations to share the use of their computers. The network would connect a small number of large computers, each with many users. Each computer would run multiple available services: file transfer, interactive text editing, batch processing of data, email, and so on.
The network protocol became known as NCP, the Network Control Protocol. As described in RFC 33 in 1970, it used an 8-bit number (from 0 to 255) to specify which computer to connect to. Another 8-bit number specified the port (service) being connected to. Finally, a 24-bit number specified the user making the connection, allowing for around 16.8 million users. This seems like a strange system now, but at the time it made sense: there weren't expected to be many computers, but eventually tens of thousands of researchers might want to use them, and, like Social Security Numbers, user numbers weren't meant to be reused.
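To make the field sizes concrete, here's a small sketch of how those three numbers could pack into a single 40-bit connection identifier. The exact layout and the helper name are illustrative, not taken from RFC 33 itself:

```python
def pack_ncp_socket(host: int, port: int, user: int) -> int:
    """Pack the NCP-style fields into one 40-bit identifier (hypothetical layout)."""
    assert 0 <= host <= 0xFF        # 8 bits: which computer
    assert 0 <= port <= 0xFF        # 8 bits: which service on that computer
    assert 0 <= user <= 0xFFFFFF    # 24 bits: which user (about 16.8 million possible)
    return (host << 32) | (port << 24) | user

# UCLA's computer at address 1, service 7, user number 42:
print(hex(pack_ncp_socket(1, 7, 42)))  # 0x10700002a
```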
All of the computers on the network were given eight-character names, and files were distributed listing each computer's address from 0 to 255, along with its hostname. You can see an example in RFC 229 — UCLA had two computers at addresses 1 and 65, CMU was at address 14, McClellan Air Force Base had address 22, and so on.

If your organization wanted to put a computer on the network, you would write to Jon Postel. He would assign your computer a unique address from 0 to 255 and include it in the list the next time he distributed one.
The switch to TCP/IP
By the mid-1970s the computer industry was changing rapidly: it had become possible to put an entire CPU on a few integrated circuits (ICs), or even just one. DEC used the technology for their popular PDP series computers, and soon you could buy a computer for thousands of dollars rather than tens of thousands. It became clear that eventually every organization would be able to have a computer.
Clearly the NCP protocol wasn’t going to work for this new world, so work began on a new system that became known as Internet Protocol (IP). The plan was to connect multiple existing networks, including ARPANET, via internetwork links. The overall network made up of all the joined-together networks would be called the Internet.
The first draft of the new standard was RFC 675, Specification of Internet Transmission Control Program, published in 1974. It used 4 bits for the destination network, and another 16 bits for the destination system on that network, allowing for 16 networks and up to 65,536 computers per network. It was the first document to use the term “internet”, shorthand for “internetwork”. This greatly expanded the number of computers that the system would be able to support, but was still built on the assumption that only a small number of organizations would be joining their networks together.
In 1978 an incomplete draft of Internetwork Protocol Specification Version 2 was published as IEN 28. This would have allowed network addresses to be of variable length, up to 16 bytes. A competing Specification of Internetwork Transmission Control Program (TCP) Version 3 was also published around the same time, as IEN 21. It also used a variable address length.

However, in 1978 researchers decided the whole approach of versions 2 and 3 was wrong. The standard was becoming overly complex because it tried to handle everything from moving raw bytes across the network to managing complex connections to services. Instead, it was decided to split the standard into two pieces: the Internet Protocol (IP) would describe how raw packets of data traveled across the network, and the Transmission Control Protocol (TCP) would build on that to support connections to services.
June 1978 saw the publication of IEN 40, Transmission Control Protocol Version 4. It noted:
All addressing information (including port identification) has been eliminated and is expected to be carried in the internet protocol.
Document IEN 44, published alongside it, described the revised IP data format. The header now had fixed address widths — 8 bits for the network, and 24 for the address on that network.
In 1979, IEN 81 was published. It, too, claimed to be Transmission Control Protocol Version 4. Two more drafts followed, and in 1980 the final versions of IP version 4 and TCP version 4 were published as RFC 760 and RFC 761. In 1981 RFC 791 replaced RFC 760, but the details of network addresses weren't changed.
In November 1981, RFC 801 described a plan for how the entire ARPANET would switch from NCP to TCP/IP (which was then called IP/TCP). The switchover was officially completed on January 1, 1983.
TCP/IP version 4
The completed IPv4 specification in RFC 760 dropped the separate network identifier that had been present in earlier documents, and specified that the source and destination addresses would simply be 32-bit numbers (see section 3.1).
For convenience, these would be divided up into four 8-bit numbers, written in decimal, separated by dots. Hence the familiar IP address format:
192.0.2.17
Under the hood, though, an IPv4 address is still a single 32-bit number, and you can write it that way directly. For example:
ping 2130706433
PING 2130706433 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.070 ms
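That works because the dotted quad is just a human-friendly rendering of the underlying 32-bit number: 2130706433 is 0x7F000001 in hex, which is 127.0.0.1 read one byte at a time. Here's a quick sketch of the conversion using Python's standard library:

```python
import socket
import struct

# Dotted quad -> 32-bit integer: inet_aton yields the four raw bytes,
# and "!I" unpacks them as one big-endian unsigned 32-bit integer.
def ip_to_int(dotted: str) -> int:
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

# 32-bit integer -> dotted quad: the reverse trip.
def int_to_ip(n: int) -> str:
    return socket.inet_ntoa(struct.pack("!I", n))

print(ip_to_int("127.0.0.1"))   # 2130706433 -- the number pinged above
print(int_to_ip(2130706433))    # 127.0.0.1
print(ip_to_int("192.0.2.17"))  # 3221226001
```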
Theoretically, 32-bit addresses would give us nearly 4.3 billion possible IP addresses. However, blocks of addresses are reserved for special uses. For example, all the addresses starting with 127. loop back to the same machine, and the block from 224.0.0.0 through 239.255.255.255 is reserved for multicast, where a single packet is delivered to multiple machines at once.
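If you're curious, Python's standard ipaddress module knows about these special-use blocks, so you can check any address yourself:

```python
import ipaddress

# The standard library flags the special-use blocks described above.
print(ipaddress.ip_address("127.0.0.1").is_loopback)    # True
print(ipaddress.ip_address("224.0.0.1").is_multicast)   # True
print(ipaddress.ip_address("192.168.1.1").is_private)   # True (reserved for local networks)
print(ipaddress.ip_address("198.51.100.42").is_global)  # False (a documentation-only block)
```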
With the reserved address space removed, there are just over 3.7 billion public IPv4 addresses. That’s still a lot, but there was also quite a bit of waste — organizations would be allocated blocks of addresses, and it would then be up to them how to use them. For example, the Ford Motor Company was given a block of 16.7 million IP addresses. I don’t know how many computers Ford had on the Internet in the 1980s, but I’d put money on it being a lot less than 16 million.
An unexpected problem surfaces
For a few years, TCP/IP version 4 was a great success. Before long every major US university was on the Internet, along with many big technology companies and military facilities.
However, in 1989 the first commercial Internet Service Provider opened for business. It turned out that ordinary people wanted to connect their personal computers to the Internet too. By the 1990s millions of people were doing so, and it became clear that 3.7 billion addresses weren't going to be enough.
So now back to the key question: why was the IPv4 address space limited to 32 bits, particularly since the earlier proposals allowed for much longer addresses? To understand the decision, you need to look at what the world was like in 1980.
The computer world in 1980
In 1980 the most powerful computer in the world was the $7.9 million Cray-1S, which used 24-bit addresses to handle up to 32MB of RAM.

IBM’s most powerful mainframe, the 3081, also used 24-bit addresses and handled up to 32MB of RAM.

The routers that connected systems to the ARPANET were known as Interface Message Processors or IMPs. They were based on a 16-bit Honeywell computer design from 1969, and the CPU registers could only handle 16 bits at once. This meant that just reading a 32-bit address required multiple instructions. Processing addresses longer than 32 bits would have slowed down handling of network data, particularly if the addresses had been of varying length like in the early IP drafts.
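To make that concrete, here's a rough sketch in Python of the extra work involved (illustrative only; a real IMP did this in its own machine code). A machine with 16-bit registers has to handle a 32-bit address as two separate halves, and a variable-length address forces a loop whose length isn't known until the header has been read:

```python
# Fixed 32-bit address, on a machine whose registers hold 16 bits:
# even a simple equality test takes two compares instead of one.
def addresses_equal_32(a: tuple[int, int], b: tuple[int, int]) -> bool:
    # a and b are (high word, low word) pairs of 16-bit values
    return a[0] == b[0] and a[1] == b[1]

# Variable-length addresses, as in the early IP drafts: the router
# can't unroll the comparison, it has to loop over however many
# 16-bit words the packet header says the address occupies.
def addresses_equal_var(a: list[int], b: list[int]) -> bool:
    if len(a) != len(b):
        return False
    for wa, wb in zip(a, b):
        if wa != wb:
            return False
    return True
```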

It’s also worth remembering what didn’t exist in 1980.
First of all, there was no standardized commercial Ethernet. Offices generally didn't have local networks; people typically shared data by passing floppy disks around.

Even when businesses started to use office LANs in the mid 1980s, it wasn’t clear that TCP/IP would become the standard. IBM had a system called Token Ring, Novell had a protocol called IPX/SPX, Microsoft and 3Com had LAN Manager, and Apple had AppleTalk. Windows didn’t even ship with TCP/IP support built in until Windows NT in 1993.
There were no mobile phones. The first would be the brick-sized DynaTAC, released in 1983. It was entirely analog, so it couldn't have connected to the Internet anyway. Digital mobile phones didn't appear until 1991. The idea of a handheld computer connected to a wireless network was science fiction, alien technology like the electronic guidebook in The Hitchhiker's Guide to the Galaxy.

The closest thing to email that most businesses had was telex, which ran over regular telephone wires at a speed of about 10 characters per second. Businesses such as banks that needed to transmit data quickly used X.25, run by the telecom companies.

The most popular home computers were the Commodore PET, Radio Shack TRS-80, and Apple II. They were all 8-bit computers, using 16-bit addresses, and limited to 64KiB of memory. A total of just under 1 million personal computers had been sold.
As far as dial-up services go, CompuServe was brand new in 1980. A service called The Source had existed for about a year, for those who could afford to pay $200 for a modem, $100 for a subscription, and $2.75 per hour while using the service. In 2026 dollars that's nearly $1,000 for the modem, $390 to join, and about $10 an hour for the service, so, as you can imagine, very few computer users signed up. Also, The Source wasn't connected to the Internet, and neither were CompuServe, AOL or Prodigy when they eventually appeared — there was no dial-up Internet access until 1989.

The conclusion
So to go back to the question of why IPv4 addresses didn’t have more bits: when TCP/IP version 4 was designed, less than half of one percent of people in the US had personal computers. None of those computers were on networks, hardly anyone had a modem, there were no mobile digital devices, and even when businesses started deploying office networks a few years later those weren’t TCP/IP!
Given that context, I think it’s understandable that the designers of TCP/IP didn’t foresee that we would each want to connect a phone, a computer, and often a TV to what was at the time an experimental research network. They can be similarly excused for not guessing that one day we would connect our fridges, washing machines, wrist watches and light bulbs to their network as well.
In addition, making IPv4 addresses longer would have slowed down processing by the routers used at the time, and making the addresses variable length would have slowed things even more. (In fact, one of the benefits of IPv6 is that it has fixed-size headers.) Like so many decisions we come to regret in the world of computers, the limited address space of IPv4 can be viewed as a performance tradeoff.
At the start of this article I mentioned that we ran out of IPv4 addresses in 2011. That being the case, you might wonder how the Internet continues to function. The answer is that starting in the 1990s, a number of short-term workarounds were implemented to delay the inevitable. One that you might be familiar with is Network Address Translation or NAT, the thing that sometimes ruins your online gaming. The large blocks of addresses were also broken up, and companies were encouraged to return address space that they weren’t using.
The real solution, however, is IPv6. Every mainstream operating system now supports it by default — Linux, macOS and Windows, iOS and Android. If your ISP supports it, you have it right now, even if you don’t know it. In fact, even if your ISP doesn’t support IPv6 you still have it on your home network, but that’s another story…
Image credits
- Apple II: Nicolas Foster.
- Cray-1: NASA Ames Research Center, 1981.
- DynaTAC: mikek via VisualHunt.
- IBM 3081: Norsk Teknisk Museum, CC BY-SA 3.0, via Wikimedia Commons.
- IMP: Andrew Adams, CC BY-SA 2.0.
- Telex: ajmexico on Flickr, CC BY 2.0.
- Terminal: Jason Scott on Flickr, CC BY 2.0.