Google reveals the mysterious custom hardware that powers AlphaGo

Meet TPU, a custom ASIC processor to power machine learning

The machine learning community has coalesced around Google's TensorFlow library. Interestingly, one major holdout was DeepMind, which did most of its research on the Torch7 library. Then, late last month, DeepMind announced it was moving to TensorFlow as well; the team had already been using it for portions of AlphaGo.
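
If you haven't seen TensorFlow code, here's a rough idea of what these programs look like. This is a minimal sketch using the library's 1.x graph-and-session API from that era; the layer sizes and variable names are purely illustrative, not anything from Google's actual models.

```python
import numpy as np
import tensorflow as tf

# Build a dataflow graph: one dense layer followed by a softmax.
x = tf.placeholder(tf.float32, shape=[None, 784])  # input batch (sizes are illustrative)
W = tf.Variable(tf.zeros([784, 10]))                # weights
b = tf.Variable(tf.zeros([10]))                     # biases
y = tf.nn.softmax(tf.matmul(x, W) + b)              # predictions

# Run the graph in a session; TensorFlow maps the ops onto whatever
# hardware is available (CPU, GPU, or, inside Google, TPUs).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 784).astype(np.float32)
    print(sess.run(y, feed_dict={x: batch}).shape)  # (32, 10)
```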

Why does any of this matter? Well, with standardization comes the opportunity for optimization, and Google has gone wild with optimization in this case.

On its blog, Google just announced that it has built a custom ASIC for machine learning, called the Tensor Processing Unit (TPU), and that it's specifically optimized for TensorFlow. How optimized? Google claims roughly an order of magnitude better performance per watt than the alternatives (Nvidia GPUs, presumably), which it equates to leaping forward three generations of Moore's Law; three doublings works out to roughly 8x, which squares with that order-of-magnitude figure.

About an order of magnitude better

Almost any application can be sped up with dedicated hardware, but the workload has to be common enough to justify the custom silicon. Video playback, for instance, is specially accelerated in almost all computers and phones, because people like watching video and mostly use a few standard codecs. That makes playback smoother and easier on the battery; it's better in every way, at the cost of flexibility. Obviously, Google does a lot of TensorFlow, and it can now reap the benefits.

In fact, Google has been running TPUs in its data centers for over a year, and they're already being used to improve search results and Street View. Oh, and they powered AlphaGo during its matches against Lee Sedol. Here's what those server racks look like:

That's Google's brain. Look closely. One day, it will help you make a very important choice. Like what to eat, maybe.