AlphaGo is a triumph for humanity

... and not something to be afraid of.

As anyone who hasn't been hiding under a rock knows, Google DeepMind's AlphaGo program decisively won its third game in a row against Lee Sedol, a 9-dan professional and one of the strongest Go players of his generation.

First of all, I argue that we shouldn't find this surprising:  We're still riding the exponential wave of growth in hardware computing power, and when that's coupled with significant software advances such as deep neural networks, we get great things.  Go, despite its massive positional complexity, is still the kind of thing that computers excel at:  It has precisely defined rules and objectives, it admits a fairly compact representation, and it exists entirely within the world of bits.
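To make the "compact representation" point concrete, here's a minimal sketch -- not AlphaGo's actual encoding, just an illustration of how little state a Go position needs:

```python
# A 19x19 Go position, one byte per intersection.
# (Illustrative only; AlphaGo's real input features are richer.)
SIZE = 19
EMPTY, BLACK, WHITE = 0, 1, 2

board = bytearray(SIZE * SIZE)   # 361 bytes, all intersections empty

def place(board, row, col, color):
    """Put a stone at (row, col), 0-indexed from the top-left corner."""
    board[row * SIZE + col] = color

place(board, 3, 3, BLACK)        # a black stone on a star point
place(board, 15, 15, WHITE)

print(len(board))                # 361 -- the whole position in under 400 bytes
```

Compare that to, say, recognizing a cat in a photo, where even defining the objective precisely is hard -- Go's entire state fits in a few hundred bytes with unambiguous rules for updating it.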

Second, I argue that this is an excellent excuse for all of humanity to pat itself on the back.  Consider what went into the AlphaGo victory:

The Nature paper version of AlphaGo is noted to have used 1,202 CPUs and 176 GPUs in its distributed configuration.  The details are vague, but for our purposes don't matter.  Let's start with a few possibly-unfounded assumptions to get the ball rolling:  DeepMind's GitHub has a lot of Torch-related utilities, so let's assume they use Nvidia GPUs with the cuDNN bindings in the same way that everyone else does.  The CPUs are probably Intel's, because Intel.  As part of Alphabet, DeepMind gets to take advantage of Google's astounding compute infrastructure.  And remember:  That was just the hardware used to play the game.  In my experience, the resources used during development, experimentation, and training probably dwarf the runtime needs of a single game.

So, putting that together, we have Google's 57,000+ employees.  In case it slipped your mind, that's roughly the most valuable company in the world pitting a large amount of resources (Wikipedia says the DeepMind acquisition was north of $400M) against a single Go player.  Google has spent tens of billions of dollars on its compute infrastructure.  That computation runs on CPUs made by the hundred thousand employees of the most advanced semiconductor manufacturer in the world, and on GPUs made by the nine thousand employees of $17B-market-cap GPU behemoth Nvidia.  Nvidia's numbers are smaller because, unlike Intel, they don't manufacture their own chips -- they use Taiwanese giant TSMC's fabs, adding in another $20B of yearly revenue and 37,000 employees.  And, of course, they all have to stuff memory into those computers and GPUs, so let's pull in Korea's Samsung ($305B revenue and nearly half a million employees, though certainly not all work on computer-related things!) and SK Hynix (another 17K employees).  And a vast array of electronics-component and motherboard manufacturers, in Taiwan, mainland China, and elsewhere.

And we're only scratching the surface -- those manufacturers buy some of the most technologically complex lithographic and fabrication equipment from an array of vendors such as the Netherlands' ASML, and so on down the stack of turtles:  advances in semiconductor physics, materials science, chemistry, quantum mechanics -- you name it.

And that's only the hardware!  It took the work of millions of people, in dozens of countries, contributing directly and indirectly, just along the chain that manufactured the hardware behind this feat.

It's quite a thing to be able to build a computer that can beat a human at Go -- and that higher-level process of creativity, flexibility, research, discovery, invention, and cooperation is the thing that should really remind us that it's going to be a long time before we bemoan people taking second place to the machines.

Go humans.

