
Intel plans to ship its first-generation Neural Network Processor by the end of the year




Intel’s hardware for accelerating AI computation is finally on its way to customers. The company announced today that its first-generation Neural Network Processor, code-named “Lake Crest,” will soon roll out to a small set of partners to help them drastically accelerate how much machine learning work they can do.

The NNPs are designed to quickly churn through the math that underpins artificial intelligence applications, specifically neural networks, a currently popular branch of machine learning. One of the big problems with today’s large, deep neural networks is that they can be very computationally intensive, which makes them harder to test and deploy rapidly.
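To get a rough sense of why that math is so demanding, consider a back-of-the-envelope sketch (not Intel’s methodology, and the layer sizes are hypothetical) counting the multiply-accumulate operations in one forward pass of a small fully connected network:

```python
# Illustrative only: count multiply-accumulate (MAC) operations for a
# forward pass through consecutive fully connected layers. Layer sizes
# are made up to show how quickly the totals grow.

def dense_layer_macs(inputs: int, outputs: int) -> int:
    """MACs for one fully connected layer: one multiply-add per weight."""
    return inputs * outputs

def forward_pass_macs(layer_sizes):
    """Total MACs for one forward pass through the given layer sizes."""
    return sum(dense_layer_macs(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

# A modest classifier: 784 inputs, two hidden layers of 4,096, 10 outputs.
per_example = forward_pass_macs([784, 4096, 4096, 10])
print(per_example)              # ~20 million MACs for a single example
print(per_example * 1_000_000)  # scaled to a million training examples
```

Even this toy network needs roughly 20 million multiply-adds per example, and training repeats that work (plus the backward pass) over millions of examples and many epochs, which is the workload dedicated silicon like the NNP targets.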

At first, the NNPs will be released only to a small number of Intel partners, which the company plans to begin outfitting before the end of this year. The hardware is being developed in close collaboration with Facebook, one of the companies trying to push the boundaries of rapid development and testing of neural nets.

Customers will be able to access the NNPs through Intel’s Nervana Cloud service, though the company plans to make the hardware more available in the future, according to Naveen Rao, the vice president and general manager of Intel’s AI products group.


Observers should expect rapid iteration on the new silicon, with a faster release cadence than some of Intel’s other products. Rao said that the current fast-moving nature of the AI field means that customers want new neural network chips with new capabilities as quickly as possible, in contrast to the stability needs for CPUs and other hardware.

“When you’re working with a CPU, there are a lot of expectations put on a CPU,” Rao said in an interview. “And we’re very thoughtful about additions and changes to the CPU architecture. When you’re in a vastly changing field like neural networks, it’s valued more to iterate quickly.”

Three generations of the silicon are currently in flight at Intel, and the company plans to hit at least a yearly cadence for its hardware releases.
