Smarter Computing

Work smarter, not harder. That’s the phrase people like to use when talking about how being more efficient with one’s efforts often brings a greater reward.

It’s also starting to become particularly appropriate for some of the latest advances in semiconductor chip design and artificial intelligence-based software efforts. For many years, much of the effort in silicon computing advancements was focused on cramming more transistors running at faster speeds into the same basic architectures. So, CPUs, for example, became bigger and faster, but they were still fundamentally CPUs. Many of the software advancements, in turn, were accomplished by running some of the same basic algorithms and program elements faster.

Several recent announcements from AMD and Nvidia, as well as ongoing work by Qualcomm, Intel and others, however, highlight how those rules have radically changed. From new types of chip designs to different combinations of chip elements, along with clever new software tools and methodologies that take better advantage of these architectures, we’re on the cusp of a whole new range of radically smarter silicon that will start enabling the science fiction-like applications we’ve so far only glimpsed.

From photorealistic augmented and virtual reality experiences, to truly intelligent assistants and robots, these new hardware chip designs and software efforts are closer to making the impossible seem a lot more possible.

Part of the reason for this is basic physics. While we can argue about the validity of continuing the Moore’s Law-inspired performance improvements that have given the semiconductor industry a staggering degree of advancement over the last 50 years, there is no denying that clock speeds for CPUs, GPUs and other key types of chips stalled out several years ago. As a result, semiconductor professionals have started to tackle the problem of moving performance forward in very different ways.

In addition, we’ve started to see a much wider array of tasks, or workloads, that today’s semiconductors are being asked to perform. Image recognition, ray tracing, 4K and 8K video editing, highly demanding games, and artificial intelligence-based work are all making it clear that these new kinds of chip design efforts are going to be essential to meet the smarter computing needs of the future.

Specifically, we’ve seen a tremendous rise in interest, awareness, and development of new chip architectures. GPUs have led the charge here, but FPGAs (field programmable gate arrays), such as those from the Altera division of Intel, and dedicated AI chips from the likes of Intel’s new Nervana division, as well as chip newcomers Google and Microsoft, are starting to establish a strong presence.

We’re also seeing interesting new designs within more traditional chip architectures. AMD’s new high-end Threadripper desktop CPU leverages the company’s Epyc server design and connects multiple independent CPU dies over a high-speed Infinity Fabric link to drive new levels of performance. This is a radically different approach from the traditional one of simply making individual CPU dies bigger and faster. In the future, we could also see different types of semiconductor components (even from companies other than AMD) integrated into a single package, all connected over this Infinity Fabric.

This notion of multiple computing parts working together as a heterogeneous whole is seeing many types of iterations. Qualcomm’s work on its Snapdragon SoCs over the last several years, for example, has been to combine CPUs, GPUs, DSPs (digital signal processors) and other unique hardware “chunks” into a coherent whole. Just last week, the company added a new AI software development kit (SDK) that intelligently assigns different types of AI workloads to different components of a Snapdragon, all in an effort to deliver the best possible performance.
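To make that idea a bit more concrete, here is a rough, purely illustrative sketch (in Python) of the kind of decision logic such a toolkit might apply. The device names, workload fields and heuristics below are my own simplifications for illustration; they are not Qualcomm’s actual SDK.

# Illustrative sketch only: a toy dispatcher that routes AI workloads to the
# compute block best suited to them on a heterogeneous SoC. All names and
# heuristics here are hypothetical, not Qualcomm's actual SDK API.

from dataclasses import dataclass
from enum import Enum, auto


class Device(Enum):
    CPU = auto()   # general-purpose and flexible, but lowest throughput
    GPU = auto()   # high-throughput parallel math, higher power draw
    DSP = auto()   # fixed-point vector math at very low power


@dataclass
class Workload:
    name: str
    precision: str          # "float32", "float16", or "int8"
    latency_critical: bool


def pick_device(workload: Workload) -> Device:
    """Choose a compute block for a workload using simple heuristics."""
    if workload.precision == "int8":
        return Device.DSP    # quantized models run efficiently on the DSP
    if workload.latency_critical:
        return Device.GPU    # parallel hardware for real-time inference
    return Device.CPU        # fall back to the CPU for everything else


if __name__ == "__main__":
    jobs = [
        Workload("image_classifier", "int8", latency_critical=True),
        Workload("speech_recognizer", "float16", latency_critical=True),
        Workload("photo_indexer", "float32", latency_critical=False),
    ]
    for job in jobs:
        print(f"{job.name} -> {pick_device(job).name}")

A real toolkit would weigh far more factors (thermal headroom, battery state, which blocks are already busy), but the basic principle is the same: match each workload to the silicon that handles it best.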

Yet another variation comes from attaching high-end, power-hungry external GPUs (or other components) to notebooks via the Thunderbolt 3 standard. Apple showed this with an AMD-based external graphics card at its last event, and this week at the SIGGRAPH computer graphics conference, Nvidia introduced two entries of its own to the eGPU market.

The developments also go beyond hardware. While many people are (justifiably) getting tired of hearing about how seemingly everything is being enhanced with AI, Nvidia showed a compelling demo at its SIGGRAPH press conference in which the highly compute-intensive task of ray-tracing a complex image was sped up tremendously by leveraging an AI-created improvement in rendering. Essentially, Nvidia used GPUs to “train” a neural network how to ray-trace certain types of images, then converted that “knowledge” into algorithms that other GPUs can use to redraw and move around very complex images, very quickly. It was a classic demonstration of how the brute-force advancements we’ve traditionally seen in GPUs (or CPUs) can be surpassed by smarter ways of using those tools.
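For those curious about the general shape of that approach, here is a rough, purely illustrative sketch (in Python, using the PyTorch library) of training a small network to nudge a cheap, noisy render toward a fully ray-traced reference image. It uses random tensors as stand-ins for real image pairs and is a simplification of the idea, not Nvidia’s actual pipeline.

# Purely illustrative sketch: train a small convolutional network to map a
# cheap, noisy render toward a fully ray-traced reference, so the expensive
# computation is approximated by a fast network pass. Random tensors stand in
# for real image pairs; this is not Nvidia's actual training pipeline.

import torch
import torch.nn as nn

# A tiny image-to-image network: 3-channel RGB in, 3-channel RGB out.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # Stand-ins for training data: a low-sample "noisy" render and the
    # corresponding fully converged ray-traced reference image.
    noisy_render = torch.rand(8, 3, 64, 64)
    reference = torch.rand(8, 3, 64, 64)

    prediction = model(noisy_render)
    loss = loss_fn(prediction, reference)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained on real image pairs, a single forward pass through the network
# replaces many additional rays per pixel at render time.
with torch.no_grad():
    cleaned = model(torch.rand(1, 3, 64, 64))

The payoff is that the heavy lifting happens once, during training; at render time, the GPU only has to run a quick network pass instead of tracing many more rays per pixel.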

After progress seemed to stall for a while, the performance requirements for newer applications are becoming clear, and the amount of work that’s still needed to get there is becoming clearer still. The only way we can start to achieve these new performance levels is with the types of heterogeneous chip architecture designs and radically different software approaches that are starting to appear.

Though some of these advances have been discussed in theory for a while, it’s only now that they’ve begun to appear. Not only are we seeing important steps forward, but we are also beginning to see the fog lift as to the future of these technologies and where the tech industry is headed. The image ahead is starting to look pretty good.

Published by Bob O'Donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
