Back when I was 17, I thought C was the greatest programming language in the world.
I had started programming in BASIC, messed with assembly language, then discovered Pascal. Both BASIC and Pascal were interpreted, at least in the implementations I had, so C was my first compiled language. Being able to produce machine code without writing assembler was a revelation. Sure, C had rough edges here and there — the type declarations often needed careful thought to decode — but it did the job like nothing else.
A year or so later I discovered Modula-2, again compiled. I wasn’t a big fan of the Pascal-style BEGIN…END blocks, having become used to the brevity of C’s curly brackets, but Modula-2’s import mechanism and limited multithreading made development less painful.
Still, I couldn’t help thinking that C and Modula-2 had some obvious things missing. The one that I found strangest was the inability to return multiple values from a function. If the compiler could push multiple arguments on the stack before calling the function code, why couldn’t it pop multiple results off the stack afterwards? Another thing I found annoying was manual memory management, particularly manually sizing and resizing arrays.
Later at university, I became interested in dataflow programming languages, experimental graphical programming environments where you wired together data channels between blocks representing computation units. The idea seemed sound, particularly with talk of massively parallel computers on the horizon, but I couldn’t help thinking that textual code was still the way to go.
I found myself imagining my ideal programming language. It would be superficially like C, but with the ability to return multiple values from functions. It would handle imports and libraries like Modula-2. It would allow parallelism, with communication by message passing, defining channels in and out of functions and wiring them together. And of course, it would support arbitrary size arrays.
Then I fell in love with Lisp, learned object-oriented programming, graduated, got a job, and forgot the whole thing.
Fast forward to 2012. Google launched version 1.0 of a new programming language called Go. I ignored it for a while — you can’t pay attention to every new programming language — but after it failed to go away for a couple of years I decided to take a look at it. I soon noticed how much it seemed to resemble the imagined ideal programming language of my teenage years.
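The resemblance is easy to show in code. Here is a minimal sketch, in Go, of the three features I had imagined: multiple return values, channels wired between concurrently running functions, and arrays that resize themselves. (The function names are my own, purely illustrative.)

```go
package main

import "fmt"

// divmod returns two results at once, the feature missing
// from C and Modula-2.
func divmod(a, b int) (int, int) {
	return a / b, a % b
}

// squares reads numbers from one channel and sends their
// squares out on another: a computation unit with data
// channels wired in and out.
func squares(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	q, r := divmod(17, 5)
	fmt.Println(q, r)

	in := make(chan int)
	out := make(chan int)
	go squares(in, out)

	go func() {
		for _, n := range []int{1, 2, 3} {
			in <- n
		}
		close(in)
	}()

	// A slice grows as needed; no manual resizing.
	var results []int
	for s := range out {
		results = append(results, s)
	}
	fmt.Println(results)
}
```

Nothing here is exotic by today's standards, but each line maps directly onto something I had wished for at 17.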
Programming language snobs have been scathing about Go, calling it a joke of a language. The funny thing about jokes, though, is that they often seem to win. Unix started out as a joke, and was mocked well into the 1990s — and now everyone carries a Unix computer in their pocket. Linux was a joke too, a crudely hacked-together clone of Unix with fundamentally the wrong architecture. The web was a joke — links that broke all the time, incompatibilities everywhere, and don’t even mention JavaScript. Plenty of people tried to build a non-Unix OS in C++, Ada or some other “real” programming language, and they all failed, often before getting to market.
I messed with Go, hacked together some simple programs, and decided it was something worth investigating further. The documentation on the web was definitely lacking, though — it was time to work my way through a good book.
Next: The book.