Spectre puts the brakes on CPU need for speed

Instead of focusing on single-digit percentage increases in performance, a bigger question needs to be asked: Is it safe?
Written by Chris Duckett, Contributor

Hallelujah! If any good is to come from the revelations of Meltdown and Spectre, it will be the acknowledgement that performance increases in silicon were built on a foundation of sand, and the security tide has now come in.

Goodbye to the ridiculous comparisons, made for spurious reasons, that try to make new silicon which isn't much faster than last year's appear much better.

Case in point, from twelve months ago when Intel expanded its Kaby Lake family:

For its H-series core chips, Intel is touting a 20 percent "productivity improvement", but that comparison is against a 2013 22-nanometre i7-4700HQ running at a base frequency of 2.4GHz and using 8GB of DDR3 memory compared to the 14nm i7-7700HQ running at 2.8GHz and packing 16GB of DDR4 memory.

In order to promote a one-fifth improvement stat, Intel had to dig up an old chip manufactured on a process roughly 1.5 times larger and running DDR3 memory instead of DDR4, and whack twice as much memory into the new system for good measure.

But in 2018, we get to ask one additional question: Is the chip safe?

The combination of Meltdown and Spectre is going to force changes to the way CPUs are designed and operate, and until those fixes arrive in silicon, we are left with fixes in software that will impact performance for a very real class of computational workloads.
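As an aside on those software fixes: Linux kernels from 4.15 onwards report their Meltdown and Spectre mitigation status under /sys/devices/system/cpu/vulnerabilities/. The following is a minimal sketch, not from the article and assuming such a kernel, of how to read back what the running system says it has applied:

    // Minimal sketch (assumption: Linux 4.15+ exposing the vulnerabilities sysfs files).
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        const std::string base = "/sys/devices/system/cpu/vulnerabilities/";
        for (const std::string name : {"meltdown", "spectre_v1", "spectre_v2"}) {
            std::ifstream file(base + name);
            std::string status;
            if (file && std::getline(file, status)) {
                // Typical values: "Mitigation: PTI", "Mitigation: Full generic retpoline",
                // or simply "Vulnerable".
                std::cout << name << ": " << status << "\n";
            } else {
                std::cout << name << ": not reported by this kernel\n";
            }
        }
        return 0;
    }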

Take the experience of Epic Games, which posted a lovely graph of how the Meltdown patches alone were hitting its CPU usage.

Compared to the example quoted above, it could be said that thanks to Meltdown, Epic Games' systems are now performing in the same range as brand-new silicon from 2013. So much for half a decade of eking out single-digit percentage increases in throughput.

Even the best-case scenario put forward by Google, that its Retpoline patches have no material effect on performance, comes with a caveat: in a less marketing-friendly commit to the LLVM project, the impact is still put at 5 to 10 percent for "well-tuned servers". For statically linked C++ applications with many context switches, the search giant has seen overheads of 10 to 50 percent.
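For a sense of why those workloads suffer, consider the kind of code Retpoline targets. A call through a function pointer compiles to an indirect branch, the very instruction Spectre variant 2 abuses, and a Retpoline build (clang's -mretpoline, or GCC's -mindirect-branch=thunk) reroutes every such branch through a slower, speculation-safe return trampoline. The sketch below is a hypothetical illustration of such a call site, not code from Google's commit:

    // Hypothetical illustration of the indirect branches that Retpoline rewrites.
    #include <cstdio>

    static int add_one(int x) { return x + 1; }
    static int add_two(int x) { return x + 2; }

    int main(int argc, char**) {
        // The target is only known at run time, so the compiler must emit an
        // indirect branch here rather than a direct call. Built with
        // -mretpoline (clang) or -mindirect-branch=thunk (GCC), that branch is
        // replaced by a return trampoline, which is the source of the overhead.
        int (*op)(int) = (argc > 1) ? add_two : add_one;
        std::printf("%d\n", op(40));
        return 0;
    }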

Obviously the actual overheads experienced by users will only be known in retrospect, but they do exist, and certain applications will run up against them.

In the near future, expect more arguments over the correctness and security of processors than quibbles over who has more cores or turbo modes or the most gigahertz. Because what is the point of crowing about a 20 percent performance gap over the competition if another vendor can simply point to a proof of concept that shows the quicker chip is susceptible to a side-channel attack?

It's a different way of thinking about processors, one that reflects how a great deal of computing now happens on shared hardware far away in the cloud, and that the problems carried over from the old world of local, dedicated hardware are amplified there and still need solving.
