
How ARM is spreading into Windows PCs and the edge of the internet

Rene Haas, EVP and president of the IP Products Group at Arm.
Image Credit: Dean Takahashi

ARM executive Rene Haas is responsible for the shipment of billions of chips. As the executive vice president and president of the IP Products Group at ARM, he works with the customers that license ARM’s processor designs and use them in chips made in very large numbers.

Thanks to ARM’s domination of smartphones, the company and its customers have shipped 120 billion chips to date. But there’s a big opportunity in the expanding internet of things, or making everyday objects smart and connected. As those devices become interoperable and voice-controlled, they need more computing power. And ARM is making sure that its processors are used to provide it.

Over time, the goal of ARM’s new owner, SoftBank CEO Masayoshi Son, is to create the artificial intelligence needed for the Singularity, or the day when collective machine intelligence surpasses collective human intelligence. ARM’s job is to push AI into the edge of the network, where the company’s small, power-efficient chips are a natural choice. But it is also pushing into servers, where Intel has a newfound vulnerability, and into Windows 10 computers, which now work with ARM chips.

I caught up with Haas at CES 2018, the big tech trade show in Las Vegas last week.

Here’s an edited transcript of our interview.

Above: ARM-based processors are powering the dashboards in cars.

Image Credit: Dean Takahashi

VentureBeat: So are you busy trying to make the Singularity happen?

Rene Haas: I essentially run what was the classic ARM, pre-SoftBank. All the IP business, the product development, licensing, sales, marketing, for all the products. We’re based in Cambridge. I moved over to London about a year ago. I spend most of my time there.

VB: What does the acquisition mean for what you do day to day?

Haas: Without making the role sound larger than it is, it’s essentially the CEO of the IP group, which was what ARM was prior to SoftBank. Post the acquisition, we accelerated some efforts around another business, around connected devices, specifically software as a service. You’ve heard of Mbed Cloud, right? Mbed Cloud and the strategy around managing connected devices and building a business around downloading software updates, security, and so on.

We created a business unit around that, ISG. It stands for IoT Services Group. It’s still a nascent organization, but the decision was made to create two independent operating groups, because they attack different markets and different customers. At the executive level Simon is still the CEO, so the enterprise functions — enterprise marketing, legal, finance — are all cross-functional. But now this group I run is pretty autonomous in terms of everything relative to owning the top line P&L, owning revenue.

VB: What’s your to-do list for 2018?

Haas: Now that we’re part of SoftBank, some things have changed. Some things are the same. We’re still a publicly facing company in the sense that we’re part of the SoftBank number, but we don’t have to report numbers quarterly with the same level of introspection as we did in the past. As a result of that, we have some more freedom to invest a bit more aggressively in new markets. That’s a big thing for us in 2018, accelerating our investments in areas like machine learning, AI, doubling down on areas like security. Automotive is a big push for us. A lot of the markets we’ve been involved in, the big difference for 2018 is the acceleration of those investments.

VB: I went to Samsung’s press event. Their interest is in pushing SmartThings as the standard for IoT. On a high level it makes sense for one big company to have one way to connect to connected devices and bring everything else in. I wonder how easy it is for some of this to happen. Is every big company going to have their version of this? Are they going to be interoperable? Are these devices really going to connect and work together?

Haas: This year, all the announcements of products that are Alexa ready or Google Assistant ready — a year ago nobody was even thinking about that. I think what will happen is you’ll have standards around the input methodology, whether it’s voice or whatever. Underneath the hood people will try to put their special sauce on it. A Samsung-only interface or an LG-only interface for consumer devices, that’s hard. I think it needs to be standardized around some level of API, something that’s ubiquitous with the rest of the platform.

From our standpoint it’s a huge opportunity for us, because we also see — and this is a big 2018 initiative — the rush of compute moving to the edge and the need to do more and more local processing, less dependent on the cloud to do every bit of the processing. That’s just going to go off and accelerate, particularly as devices learn, in the context of the machine learning piece. The profile for what the learning algorithm looks like for your own devices, the performance and benefits you get as that’s more personalized and done locally, that will be pretty huge. We’re seeing an uptick there.

Above: Will the internet of things be interoperable?

Image Credit: Dean Takahashi

VB: How far along in the process do you feel like everybody is now, the standards process? Does it feel like things are going to be interoperable sometime soon?

Haas: I default to waiting and seeing who the winner ultimately will be. But devices that are Google Assistant ready, Alexa ready, I see those more as default standards, as opposed to a set of companies all getting together and trying to decide, “This is the actual standard.” That’s hard. It’s like the smart TVs you bought in the early phase that had their own web browsers and interfaces. It’s clumsy in terms of interoperability, clumsy for the end user. The stuff Google and Amazon are doing is going to accelerate it. We’re in a good spot, because that’s the technology that underpins us.

VB: Blockchain is part of some of this, but does that come on your radar in any way, on the silicon level?

Haas: Just from the standpoint of the processing that’s required for it, what’s required in terms of security. But in terms of the interface and what’s going on inside, not so much.

VB: I talked quite a long time with Philip Rosedale, who created Second Life, and now he has this company High Fidelity. They can create a bunch of things for an avatar to wear, sell those, and then log that transaction in a blockchain. Then it becomes interoperable with other virtual worlds. If you buy something in High Fidelity maybe you could use it in Second Life. Your avatar travels with you and all the stuff you bought. It seems like IoT transactions might work in a similar way.

Haas: Potentially. But blockchains are static. With the real-time issue when it comes to payments, you need some other type of methodology. In the area of crypto and anything going on with security of payments, that’s a very central area for us. It takes a lot of processing. That’s something that requires some level of standardization. Different countries have different laws and bars in terms of threats and the like.

Because China has so much control — talking about mobile, all the carriers in China are state-run. Getting an illegal SIM card is very hard. Fraud is prevented by your identity, your mobile number. As a result, mobile payments are ubiquitous in China. In North America we’re way behind. But a lot of it has to do with the way payments are set up, the relationships between banks and so on.

It’ll be interesting to see what happens in China. The government has such tight controls on monetary issues. I lived in China for a couple of years, so I lived through this. Taking money out of the country is really hard. But now, with Tencent and Alibaba as really large merchants, the government can’t see where all the money is going, particularly if it travels outside of China. They’re already getting their nose into trying to take partial ownership of these companies.

Above: Smart cities need a lot of processors.

Image Credit: Dean Takahashi

VB: How you architect a blockchain depends on what kind of government is overseeing you.

Haas: Exactly.

VB: When you have conversations about blockchain within ARM, what do you have to think about?

Haas: Primarily we’re focused on edge compute. When we think about blockchain and the things required around security and local processing, it’s all about power and area. Machine learning is a big spot there for us, because you’ll need to do some level of neural network processing to handle the data. Whether a GPU is the right thing — if you’re putting it in an edge device, power is a big issue. Solving those issues in the cloud, one way would be GPUs, but for us, it’s more about the edge. We’re looking at all kinds of different architectural methodologies there. Nothing we’re talking about publicly yet.

VB: There’s all the talk about the CPU flaw. Is there any easy way to describe it and reassure people?

Haas: We had a lot of conversations on that. It’s interesting that it’s called a “CPU flaw,” because it’s actually — researchers have found a hole in modern processing techniques that can potentially be used to subvert some code. It impacts high-end CPUs more than low-end CPUs because it’s all about speculative processing and cache control. It does require a massive amount of coordination across the ecosystem. It’s not just an Intel CPU problem or an architecture problem. It’s a modern compute problem. Chip vendors, OEMs, software vendors, all of us need to work together.

You have to look at the workloads. It’s very workload dependent. Again, the issue is around this methodology of speculative caching. It’s basically how much predictability you want to do. Some of the patches slow that speculative process down or eliminate it, which in layman’s terms — let’s say you’re driving between Phoenix and Los Angeles and the speed limit is 60, but you know you can get away with 80 because there are no radar checks. But if you find out there’s a speed camera every three miles, you just go 60 the whole time. If you figure out the cameras are 100 miles apart, you go back to 80 most of the time and slow down for the cameras.

It just ends up being, with the patches, how much of these speculative caching workloads get compromised. That’s a function of software and hardware together. I don’t know if you’ve seen any of the benchmarks, but it’s very workload dependent, very much a function of how aggressive the patches get as far as slowing down this methodology.
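To make the speculative-execution issue concrete: the vulnerability researchers described (Spectre variant 1) abuses the fact that a processor may run the body of a bounds check before the check itself resolves. The C sketch below is an illustration of that pattern, not code from ARM or anyone in this interview; the array names and sizes are invented for the example.

```c
/* Illustrative sketch of a speculative bounds-check bypass (Spectre variant 1).
 * Array names and sizes are hypothetical; this is not any vendor's code. */
#include <stddef.h>
#include <stdint.h>

uint8_t array[16];            /* data the code is allowed to read */
size_t  array_size = 16;
uint8_t probe[256 * 64];      /* probe array; which cache line gets loaded
                                 acts as a side channel */

void victim(size_t index) {
    if (index < array_size) {         /* the bounds check */
        /* While the check is still resolving, the CPU may speculatively
         * execute the loads below. With an out-of-bounds 'index', the byte
         * that gets read never reaches architectural state, but it leaves a
         * trace: the cache line of 'probe' that is touched depends on the
         * secret value, and an attacker can later time accesses to recover it. */
        uint8_t value = array[index];
        (void)probe[value * 64];
    }
}
```

The mitigations Haas is describing work by inserting barriers or otherwise limiting how far past a check like this the processor will speculate, which is why the performance cost varies so much from one workload to another.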
