Did you know that the desktop or laptop computer you use is almost certainly based on the design of a dumb terminal? Let me tell you the story…
But first, a disclaimer: As with everything else on my web site, this represents my opinion, not IBM’s. I’ve done my best to be accurate, but inevitably I’ve had to skate over details here and there. You’re encouraged to follow the links and read more!
So, once upon a time, over 40 years ago, computers filled rooms, and everyone used dumb terminals for their computer access. These terminals were expensive to manufacture, being built from discrete components. They had complicated decoding logic which had gradually grown over the years, as the computers they were connected to had gained more features.
In those days, the basic unit of computation wasn’t necessarily an 8-bit byte. The popular PDP-8 used 12-bit words, with 7-bit ASCII characters that always had the 8th bit set to 1. The IBM System/360 used 8-bit bytes, but in the EBCDIC character set. Older IBM systems used 6-bit BCD coding, based on punched cards. The CDC Cyber range of mainframes used 12-bit bytes and 60-bit words, but 6-bit characters in CDC’s own display code. And each manufacturer had its own set of control codes for moving the cursor around the screen.
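To make that concrete, here’s a small Python sketch, using nothing period-accurate, just modern built-in codecs, of how the same text came out as different byte values under ASCII and EBCDIC, and what the PDP-8 convention of forcing the 8th bit to 1 looks like. The ‘cp037’ codec is one common EBCDIC variant; the variable names are mine.

```python
text = "HELLO"

ascii_bytes = text.encode("ascii")    # 7-bit ASCII:            48 45 4C 4C 4F
ebcdic_bytes = text.encode("cp037")   # EBCDIC (code page 037): C8 C5 D3 D3 D6

print("ASCII :", ascii_bytes.hex(" ").upper())
print("EBCDIC:", ebcdic_bytes.hex(" ").upper())

# The PDP-8 convention: store ASCII with the top bit forced to 1
pdp8_style = bytes(b | 0x80 for b in ascii_bytes)
print("ASCII, bit 8 set:", pdp8_style.hex(" ").upper())
```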
Because of all this variation, the Computer Terminal Corporation (CTC) decided to build a universal terminal which would have its own internal instruction set. You would then be able to load in a tape which would map from your choice of proprietary terminal control codes to the terminal’s internal set. In other words, it was going to be a hardware computer designed for running terminal emulators. It was to be called the Datapoint 2200.
This programmable terminal was designed to have a processor built from individual TTL components — small ICs with typically a dozen or so transistors on them, at most a few hundred. The most popular TTL chips were (and still are) the Texas Instruments (TI) 7400 series. But because of the number of components needed, the Datapoint 2200 was going to be expensive to build, heavy, and would need fans to keep all the circuit boards cool. Previous CTC terminals had suffered from major overheating problems.
In the early 1970s integrated circuits based on Large Scale Integration (LSI) were becoming commercially available. These new chips had thousands of transistors on them, allowing an entire circuit board of components to be condensed into a single chip. So CTC approached a startup company that was designing LSI microchips, and asked them if they could replace the two circuit boards of the Datapoint 2200’s processor design with a single chip.
That startup company was called Intel. At the time they mostly made memory chips, but in 1971 they launched the first CPU on a chip, the 4-bit Intel 4004, which they had developed for a Japanese company to use in its desktop calculators.
A year later, Intel finished taking the 100 or so chips for the 2200 terminal emulator’s processor and condensing them down to a single IC design known as the 1201. TI manufactured some samples. The new chip wasn’t any good; the instruction set was buggy. Intel were convinced they could fix the bugs, but CTC weren’t prepared to wait, and launched the 2200 using the full-size CPU circuit boards instead.
Nevertheless, Intel had the 4004’s designer work on the 1201. The bugs were fixed, and the result was delivered to CTC in 1972. But by that time, CTC had upgraded their terminal design to a new model with a built-in hard disk drive, and the Intel chip wasn’t powerful enough to handle that. CTC decided to cancel the contract with Intel and walk away from the deal.
Intel decided to take the fixed 1201, rename it the 8008, and see if they could get anyone to buy it. A team at California State University Sacramento did, and built a small microcomputer around it, complete with a BASIC interpreter. Before long, a couple of commercial 8-bit personal computers were on sale with the Intel 8008 inside.
Based on the feedback from 8008 users, Intel added some new instructions which provided a way to use two 8-bit registers together as if they were a single 16-bit register. They also expanded the stack pointer and program counter to 16 bits, allowing up to 64K of RAM. Though it seems comical now, that was a huge amount at the time; in those days, an IBM mainframe only had 8K or 16K of high-speed storage, and the PDP-11 that UNIX was developed on only had 24K of RAM.
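If you’ve never run into register pairing, here’s a rough sketch of the idea in Python. The names H and L echo the 8080’s register pair, but the arithmetic is the whole point: two 8-bit values glued together give one 16-bit address, and 16 bits can only name 64K of memory.

```python
# Two 8-bit registers treated as one 16-bit register (the 8080 called
# this pair H and L, for the high and low halves of an address).
H = 0x3F            # high byte
L = 0xA2            # low byte

HL = (H << 8) | L   # the combined 16-bit value
print(hex(HL))      # 0x3fa2

# A 16-bit address can pick out 2**16 distinct bytes -- hence the 64K limit.
print(2 ** 16)      # 65536
```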
The new expanded 8008 was launched as the Intel 8080. It was used for a number of early personal computers, particularly the Altair 8800, which was sold as a kit through ads in electronics hobbyist magazines.
However, by 1976, Intel had competition. Federico Faggin, the technical genius who had led the project to fix the Intel 1201, had left Intel to form his own company, Zilog. They launched a CPU called the Z-80, cunningly designed to be backward compatible with Intel 8080 code. It had more instructions and registers than the 8080, to make programming easier; it even had two separate sets of registers you could swap between. It required fewer support components than Intel’s chip, and became wildly popular. The Z-80 was used in the Tandy TRS-80, launched in 1977, the first popular pre-built personal computer to be available in malls across America. (The Z-80 also later became part of the Sinclair ZX80, ZX81 and Spectrum computers.)
Meanwhile, Motorola had designed a minimalist 8-bit processor called the 6800, targeted at computer peripherals, test equipment, and terminals. A Motorola engineer named Chuck Peddle left the company and helped found MOS Technology; there, he designed a far cheaper chip inspired by the 6800. It was a radical, minimalist design for a general purpose computer, having a tiny instruction set and a mere 3 registers (compared to the 7 available on the 8080 and the two sets of 8 general purpose registers found in the Z-80). The new MOS CPU was known as the 6502. Its lean design meant that it could be manufactured and sold for a fraction of the price of any Intel processor; it launched for just $25, while the 8080 had launched at $360. The 6502 became the basis of the Apple II, BBC Micro, Atari 2600, Nintendo NES, Commodore VIC-20, Commodore 64, and many other home computers of the late 70s and early 80s. It has since come to be viewed as perhaps the first RISC design.
Intel were forced to fight back. They started designing the Intel iAPX 432, which would be their first 32-bit processor, and would ensure their technological dominance during the 1980s. It would be programmed in the US DoD-approved language of the future, Ada. It would even have built-in support for object-oriented programming.
In the meantime, Intel needed a stopgap solution to claw back some market share. They launched the 8085, which was an 8080 that needed fewer support components: it could talk directly to RAM chips and only needed a single-voltage power supply. It wasn’t a big hit, though; it was pricey, and was only ever used in a few models of computer. However, one of those computers was the IBM System/23 Datamaster.
IBM had noticed businesses starting to use Apple II and TRS-80 computers for word processing, spreadsheets, and other business tasks. The microcomputers were starting to make it harder to sell IBM mainframes. In response, the company developed a series of standalone desktop business computers: the IBM 5100 for scientists, the IBM 5110 for accounting and ledgers, and the IBM Displaywriter for word processing.
This strategy of single-purpose computers met with limited success. Small businesses would have a hard time justifying the purchase of a $10,000 IBM 5110 and a $7,800 Displaywriter when they could buy a TRS-80 for $2,500 and use it for both word processing and accounting. IBM realized it needed something a little more general purpose, but at the same time it didn’t want to launch anything that might destroy sales of its high-end workstations or mainframes. The Intel 8085 seemed like a good choice of CPU; it was basically the guts of a souped-up dumb terminal, and already inferior to the Z-80, so it posed no threat. It went into the System/23 Datamaster, which offered both word processing and accounting at a new low price of $9,830 (including printer).
The Datamaster was IBM’s cheapest business computer ever, but it still failed to compete with the early 1980s home computers which were being repurposed as business machines. IBM decided it needed to compete directly with the likes of the TRS-80 Model III and Apple II Plus. A team was assembled to design and build an IBM PC.
Meanwhile, Intel had been having trouble with the iAPX 432. As a stopgap measure, they put together another 8000-series CPU aimed at stealing back business from Zilog. While it was a ground-up redesign, Intel made it source-compatible with the 8008, 8080 and 8085, so if you needed to, you could reassemble your old assembly language code and run it on the new chip, called the 8086. Unlike the 8085, though, it was a proper 16-bit design, with new instructions intended to help with the implementation of modern (for the time) programming languages like Pascal. The basic design of the 8086 was thrown together in about 3 months, and was in production just over 2 years later.
By this time, Z-80 based CP/M had become the OS of choice for serious business computing. It was the first OS to provide a certain amount of cross-platform compatibility; you could run CP/M applications like WordStar or dBase on a TRS-80, a MITS Altair, a DEC, an Apple II with a Z-80 card added to it, or even the revolutionary Osborne 1 portable computer.
At IBM, the PC team considered using IBM’s new RISC CPU, the 801. However, it was seen as a mainframe-class processor, and using it in what was intended to be a cheap personal computer would have raised political and technical hurdles. Some of the team had worked on the Datamaster, so the Intel 8000-series CPUs were familiar to them. The 8086 was squarely aimed at the Z-80, which meant it ought to be easy to get CP/M ported to it. It was so slow that it couldn’t possibly compete with IBM’s more expensive business systems. Best of all, IBM had already licensed the option of making 8086 chips itself.
The 8086-based IBM PC was to launch in 1981. Negotiations to have CP/M ready at launch hadn’t gone well. IBM’s CEO at the time was John Opel, who was on the board of United Way of America with Mary Gates. Mary apparently mentioned that her son Bill wrote microcomputer software; at the time, he was selling Microsoft BASIC to hobbyists. IBM asked Microsoft if they could supply an operating system similar to CP/M but for an 8086-based computer.
Microsoft didn’t have anything like that. So Bill Gates said yes, and then he and Paul Allen shopped around and eventually bought all rights to an unauthorized CP/M clone called 86-DOS from Seattle Computer Products, for $50,000. 86-DOS was quickly renamed MS-DOS. It was sold on a per-copy basis to IBM and shipped with the IBM PC as PC-DOS, but Microsoft made sure the contract was non-exclusive so they could sell it to other companies as well, at $50 per copy.
In the meantime, Intel had continued with its stopgap attempts to compete with the Z-80, and had produced a cut-down version of the 8086, called the 8088. It had the same registers and address space, but moved all the data across an 8-bit bus like the Z-80, instead of a 16-bit bus. This meant performance was worse, but you needed even less in the way of support circuits. The IBM PC eventually shipped with the 8088 inside.
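Here’s a toy illustration in Python, with made-up variable names, of what the narrower bus costs: the same 16-bit value that crosses a 16-bit bus in one transfer has to be split into two byte-sized transfers on the 8088.

```python
value = 0x1234

# 8086: the whole 16-bit word crosses the bus in one transfer
bus_16 = [value]

# 8088: the same word has to go as two 8-bit transfers
# (low byte first, as on Intel's little-endian chips)
bus_8 = [value & 0xFF, (value >> 8) & 0xFF]

print([hex(x) for x in bus_16])   # ['0x1234']
print([hex(x) for x in bus_8])    # ['0x34', '0x12']
```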
A couple of years later, Intel’s iAPX 432 was finally ready. It was so complicated that it wouldn’t fit on a single chip using the technology available at the time, so it was shipped as two separate chips: one to fetch and decode the instructions, and a second to execute them. It was substantially slower than the Motorola 68000, launched four years earlier, and personal computers based on the 68000 were starting to appear.
Intel threw out the iAPX design and went back to the 8086. They added a new way of handling memory addresses, so that the chip could deal with up to 16MB of RAM; however, it also had an 8086-compatible mode so you could keep running your DOS programs. The new 80286 also had memory protection options, and roughly doubled the number of instructions executed per clock cycle. It used twice as many transistors as the 8086 and could only barely be squeezed onto a single IC, but it was faster than the iAPX 432, and was used in the IBM Personal Computer/AT, the AT standing for Advanced Technology.
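For the curious, the arithmetic behind those memory figures is simple enough to sketch in a few lines of Python. This is not how the 286’s protected mode actually works (that involves segment descriptor tables); it just shows the 8086-style real-mode address calculation and where the 1MB and 16MB limits come from.

```python
def real_mode_address(segment, offset):
    """8086-style real mode: a 20-bit address from two 16-bit values."""
    return (segment << 4) + offset

# A well-known example: the PC's text-mode screen buffer at B800:0000
print(hex(real_mode_address(0xB800, 0x0000)))   # 0xb8000

print(2 ** 20)   # 1,048,576 bytes  = 1MB  addressable by the 8086
print(2 ** 24)   # 16,777,216 bytes = 16MB addressable by the 80286
```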
The AT was only advanced for a few months, though. The Macintosh had launched earlier that same year, and it was closely followed in 1985 by the Amiga and Atari ST. All three used the Motorola 68000, and all three made the text-based IBM PC look like a dumb terminal. Throwing high resolution graphics around would require a lot more RAM, so Intel expanded the 8086 design again, into a full 32-bit design that could compete with the 68000 series. This was called the 80386.
Of course, Intel knew it couldn’t keep bolting more features onto the 8086 forever, so in 1984 it started designing two new RISC CPUs, the i860 and the i960. This time, the i960 would be the do-everything high-end design aimed at government and military applications (and still to be programmed in the language of the future, Ada), while the i860 would be a more manageable general purpose RISC CPU for desktop computers.
In the meantime, Motorola had launched the 68030, a full 32-bit CPU with all the memory protection and other features of the 80386. It started to appear in UNIX workstations like the NeXT cube, as well as in the Macintosh and Amiga. Intel needed to compete, so in 1989 they launched both the new i860 and the 80486.
The 80486 added high-speed on-chip cache, like the 68030. It also brought the floating point unit on-chip, something Motorola wouldn’t do until the 68040. It was quickly incorporated into PC designs, while the i860 was quietly ignored.
At this point, Intel realized that the DOS and Windows based PC industry wasn’t going to be willing to migrate to a whole new architecture. So they set out to build a completely new CPU from the ground up — but one that would still be able to run legacy 8086 software.
Meanwhile, AMD had produced a chip that was code-compatible with the 80386. Unfortunately for Intel, the AMD Am386 was also faster than the 80386, and almost as fast as the 80486 because of its use of instruction pipelining and internal cache. It ran hot compared to Intel’s processor, but customers just put more fans in their computers.
For the next decade or so, Intel and AMD engaged in a technological battle. The Pentium, Pentium Pro, Pentium II and Pentium III were released, each an incremental improvement building on the old 80486 design: the 80486 evolved into the P5 microarchitecture of the Pentium, which in turn became the P6 microarchitecture of the Pentium Pro.
But Intel still wanted a new, clean chip design. So in the late 90s, they started working on a new ground-up chip design called NetBurst. They went crazy with the pipelining and cache. Also, because people tended to evaluate competing PCs based on their CPU speed in MHz or GHz, Intel designed the new Pentium 4 so that they would eventually be able to up the clock speed to 10GHz.
At the same time, IBM’s POWER architecture and Sun’s SPARC were carving up the high-end server and workstation market. Intel again felt it needed something to compete. The i860 had failed, so yet another ground-up CPU project was started. The result was called Itanium, a true 64-bit CPU. It would even support the language of the future, Ada. This time, Microsoft was on board, and would port Windows to run on Itanium. Everyone was agreed that it would utterly destroy POWER and SPARC.
Pentium 4 launched in 2000 with saturation advertising. It soon seemed as though the “bing bong” Pentium jingle was a mandatory part of every TV ad break. Of course, the chips were much more expensive than AMD’s offerings — after all, someone has to pay for all that advertising — but the strategy seemed to work. People didn’t even notice that the Pentium 4 needed to run at a 15% higher clock speed just to match the performance of the old P6 Pentium chips.
The Pentium 4 wasn’t really suitable for laptops, though, because of its high power requirements. So as a stopgap, Intel went back to the old P6 Pentium Pro-era design, made a few improvements, and started selling it as the Pentium M.
Itanium launched in 2001. Hardly anyone noticed, and it quickly got dubbed ‘Itanic’. Over the next four years it quietly sank.
By 2003, AMD was making significant gains in the high-performance x86 server market. Intel tried launching a Pentium 4 Extreme Edition at $999 per chip, but the next week AMD launched the Athlon 64. It was a true 64-bit x86 chip, but it could still run 32-bit x86 code. So it was more compatible than Itanium, and faster and more expandable than the Pentium 4.
In 2004, Intel released a Pentium 4 with an implementation of AMD’s 64-bit x86 instructions. It was another humiliating stopgap measure, but worse was to come. The NetBurst architecture wasn’t working out the way it had been planned: instead of 10GHz, the fastest the Pentium 4 was ever pushed to was 3.8GHz. Beyond that, it had major thermal problems, drawing and dissipating so much power that it was tough to keep the chip from melting.
The Pentium 4 wasn’t going to cut it against the competition. So Intel threw away both their new architectures, Pentium 4/NetBurst and Itanium, and went back to the old P6 Pentium Pro design yet again. This time they took the old Pentium M core, stuck two of them on a single chip, added more cache, and called the result Core Duo. The ’new’ architecture was called Core, and it was a success — the personal computer industry was moving towards laptops, and at the same time people were starting to demand smooth multitasking and multimedia, listening to MP3s or watching videos while editing documents. Core Duo’s comparatively low power requirements and dual cores made for a great laptop experience. By 2006 they had extended the Core architecture to add AMD’s 64 bit instructions, and Core 2 was born. Core i5 and Core i7 followed, and here we are.
So in summary: The Intel 1201 condensed the processor of a dumb terminal onto a single chip, but didn’t work. It was fixed up and relaunched as the 8008. The 8008 was hacked to support 16-bit addresses, and became the 8080. The 8080 was reworked to need fewer support chips, as the 8085. The 8086 was a quick hack designed to be a new 16-bit chip, but compatible with the 8085. The 8086 was temporarily hacked to support more memory as the 80286, given bolted-on 32-bit support as the 80386, then upgraded to become the 80486 and the Pentium series. The 80486-derived Pentium design, by way of the Pentium Pro, became a stopgap CPU for laptops, the Pentium M. That then became the CPU core at the center of the Core series, which had a clone of AMD’s 64-bit support bolted on as the Core 2.
So to summarize the summary, we’re still running CPUs based on the design of a 1971 dumb terminal, with more and more hacks bolted on. Intel have desperately tried to replace the mess multiple times, but every fundamentally new CPU architecture they’ve come up with has been sub-par in some major respect, and failed.
How long can this last? Well, there are signs that the era of the x86 might finally be ending, as the personal computer era ends. More and more people use mobile phones and tablets as their primary computing devices, and Intel has failed to get any traction in that market. Instead, all the popular devices are based on ARM, a pure and clean RISC design by Acorn, inspired by the classic 6502. Recently, the most popular laptop on Amazon has been a Chromebook running on ARM. Devices are starting to appear powered by the 64-bit ARM design released in late 2011. At the high end, IBM’s POWER architecture has reached clock speeds of up to 5GHz, and massive supercomputing projects like Watson are built on it. Intel is in the middle, being squeezed from both sides. RISC may yet have the last laugh.