
When the Cloud Is Swamped, It's Edge Computing, AI to the Rescue

Powerful local processors can obviate the need for a device to have a cloud connection.

April 10, 2018

Along the coastline of Australia's New South Wales (NSW) state hovers a fleet of drones, helping to keep the waters safe. Earlier this year, the drones helped lifeguards at the state's Far North Coast rescue two teenagers who were struggling in heavy surf.

The drones are powered by artificial-intelligence (AI) and machine-vision algorithms that constantly analyze their video feeds and highlight items that need attention: say, sharks, or stray swimmers. This is the same kind of technology that enables Google Photos to sort pictures, a home security camera to detect strangers, and a smart fridge to warn you when your perishables are close to their expiration dates.

But while those services and devices need a constant connection to the cloud for their AI functions, the NSW drones can perform their image-detection tasks with or without a solid internet connection, thanks to neural compute chips that let them perform deep-learning calculations locally.

These chips are part of a growing trend of edge-computing innovations that enable our software-powered devices to perform at least some critical functions without a constant link to the cloud. The rise of edge computing is helping solve problems new and old and paving the way for the next generation of smart devices.

Unburdening the Cloud

In the past two decades, the cloud has become the de facto way of hosting applications, and with good reason.

"The thing that makes the cloud so attractive is that it tends to offload the cost of starting up any activity you want to perform," says Rob High, CTO of IBM Watson. "The cloud... allows people to... solve real problems today without having to go through the cost of infrastructure creation."

With ubiquitous internet connectivity and near-countless cloud applications, services, and development platforms, the barriers to creating and deploying applications have lessened considerably. The vast resources of cloud providers such as IBM, Google, and Amazon have boosted the development not only of trivial business applications but also of complex software that requires vast amounts of computation and storage: AI and machine-learning algorithms as well as streaming and augmented-reality (AR) applications.

But these advances have also created a challenge: Most of the applications we use can't function unless they are connected to the cloud. This includes most of the applications that run on computers and phones as well as the software in fridges, thermostats, door locks, surveillance cameras, cars, drones, weather sensors, and so on.


With the advent of the Internet of Things (IoT), an increasing number of devices are running software and generating data, and most of them will require a link to the cloud to store and process that data. The amount of power and bandwidth required to send that data to the cloud is immense, and the space needed to store the data will challenge the resources of even the most powerful cloud behemoths.

"There's a lot of data that we're collecting in these systems, whether it's at the edge, or it's an IoT device, or any other place, that you could almost decide not to care about," High says. But if every decision must take place in the cloud, all that data will have to be sent across the network to cloud servers to be scrubbed and filtered.

As an example, High names modern airplanes, which contain hundreds of sensors that monitor jet engines and collect hundreds of gigabytes of status and performance data during each flight. "How much of that data really matters if you want to analyze it over an aggregate? Probably only a fraction of it," High says. "Why not just get rid of it at the source when it's not necessary for anything else you're doing?"

Doing what High suggests outside the cloud was previously all but impossible, but advances in low-power, low-cost system-on-chip (SoC) processors have given edge devices more computing power, letting them shoulder some of the computational burden of their ecosystems, such as performing real-time analytics or filtering data at the source.
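
As a rough sketch of what that local filtering might look like, the short Python loop below keeps only sensor readings that deviate sharply from a running average and uploads those outliers in batches; read_sensor() and upload_to_cloud() are hypothetical placeholders, not any vendor's actual API.

```python
# Sketch of edge-side filtering: keep only anomalous readings,
# discard the rest at the source, and upload the survivors in batches.
# read_sensor() and upload_to_cloud() are hypothetical placeholders.
import random
import statistics

def read_sensor():
    # Stand-in for a real engine, temperature, or vibration sensor.
    return random.gauss(100.0, 5.0)

def upload_to_cloud(batch):
    # Stand-in for an MQTT/HTTPS upload; here we just print.
    print(f"uploading {len(batch)} anomalous readings")

def filter_at_edge(num_samples=10_000, window=200, threshold=3.0, batch_size=50):
    recent, batch, kept = [], [], 0
    for _ in range(num_samples):
        value = read_sensor()
        recent.append(value)
        if len(recent) > window:
            recent.pop(0)
        if len(recent) >= 30:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent) or 1.0
            if abs(value - mean) > threshold * stdev:
                batch.append(value)   # only outliers ever leave the device
                kept += 1
                if len(batch) >= batch_size:
                    upload_to_cloud(batch)
                    batch = []
    if batch:
        upload_to_cloud(batch)
    print(f"kept {kept} of {num_samples} readings ({kept / num_samples:.1%})")

if __name__ == "__main__":
    filter_at_edge()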

"There's so much data in the edge environment, it makes sense to bring some of the cloud computing capabilities into the computational capacity of the edge device," says High.

Privacy Concerns


The benefits of edge computing aren't limited to freeing up cloud resources.

Remi El-Ouazzane, general manager of Movidius within Intel's New Technology Group, cites commercial security cameras as another example of where edge computing can make a huge difference. You see these cameras at traffic lights, in airports, and at the entrances of buildings, recording and streaming high-quality video across the network around the clock.

"The less data you need to haul back into a server or data center, the more scrubbing and finessing you can do locally, the better your overall cost of ownership will be from a storage and transfer perspective," El-Ouazzane says.

This means providing cameras with the power to analyze their own video feeds, determine which frames or lengths of video require attention, and send only that data to the server.
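
In rough terms, the camera-side logic might look like the Python loop below, which runs a local detector on every frame and uploads only the frames that contain something of interest; capture_frame(), detect_objects(), and send_to_server() are hypothetical stand-ins for the on-device model and upload path, not Movidius or Intel APIs.

```python
# Rough sketch of an edge camera loop: analyze frames locally and
# send only the interesting ones upstream. All helpers below are
# hypothetical stand-ins, not any vendor's API.
import time

INTERESTING = {"person", "vehicle", "animal"}

def capture_frame():
    # Placeholder for grabbing a frame from the camera sensor.
    return b"raw-frame-bytes"

def detect_objects(frame):
    # Placeholder for an on-device neural network (e.g., a vision
    # model running on a VPU). Returns a list of detected labels.
    return []

def send_to_server(frame, labels):
    # Placeholder for uploading a flagged frame plus its metadata.
    print(f"uploading frame with detections: {labels}")

def run_camera(poll_seconds=0.5):
    while True:
        frame = capture_frame()
        labels = [l for l in detect_objects(frame) if l in INTERESTING]
        if labels:
            send_to_server(frame, labels)   # only flagged frames leave the device
        # everything else is dropped locally, saving bandwidth and storage
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_camera()
```

Everything the detector considers uninteresting is dropped on the device itself, which is what keeps storage and transfer costs down.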

When those cameras are installed in your home, your office, or any private location, the connection to the cloud also becomes a potential security concern. Hackers and security researchers have been able to compromise the connection between home appliances and their cloud servers to intercept sensitive video feeds. Parsing the data locally obviates the need to have a video conduit between your home, your private life, and a service provider.

Movidius, which was acquired by Intel in 2016, is one of several startups that make computer chips specialized for AI tasks such as speech recognition and computer vision. The company manufactures Vision Processing Units (VPUs)—low-power processors running neural networks that analyze and "understand" the context of digital images without the need to send them back to the cloud.


The Movidius Myriad 2 is an always-on vision processor made for power-constrained environments.

"When the camera understands the semantics of what it's looking at, then the ability to impose rules as to what the camera can do or cannot do is becoming a very easy task," El-Ouazzane says. "You do not need to actually capture your living room for the next 12 hours just to know that, at a given time, your dog crossed the carpet in front of the sofa."

Other companies are exploring the use of specialized AI-powered edge computing to preserve user privacy. The Apple iPhone X, for example, is powered by the A11 Bionic chip, which can run AI tasks locally, allowing it to perform complicated facial recognition without sending the user's mugshot to the cloud.

More AI processing at the edge can pave the way for decentralized artificial intelligence, where users have to share less data with big companies to use AI applications.

Reducing Latency

Another problem with big cloud providers is that their data centers are located outside big cities, hundreds or thousands of miles away from the people and devices that use their applications.

In many cases, the latency caused by data traveling to and from the cloud can yield poor performance or, worse, fatal results: think of a drone trying to avoid a collision or land on uneven ground, or a self-driving car trying to decide whether the object ahead is an obstacle or a pedestrian.

Movidius's lightweight implementation of deep neural networks and computer vision makes its chips suitable for mobile edge devices like drones, for which power-hungry hardware such as GPUs isn't feasible. Drones are a particularly interesting case, because they need low-latency access to AI computation and must keep functioning when they go offline.


Gesture detection is another area where edge computing is helping to improve the drone experience. "The goal is to make drones accessible for many people, and gesture seems to be a nice way for people to use them. Latency matters when you gesture the drone to perform some task," El-Ouazzane says.

For startups such as Skylift Global, which provides heavyweight drone services to rescue workers and first responders, low-latency access to AI and compute resources can save money and lives. "It will significantly cut data ingestion costs, reduce network latency, increase security, and help turn streaming data into real-time decisions," says Amir Emadi, the CEO and founder of Skylift.

Delivering supplies to first responders requires split-second decisions. "The more time that passes, for instance in fighting a wildfire, the costlier it becomes to remedy the situation. As our drones become capable of making real-time decisions at the edge even when they lose connectivity, we will be able to save more lives, money, and time," Emadi says.

Other domains in need of near-real-time computation are augmented- and virtual-reality applications and autonomous vehicles. "These are all experience-based computing environments. They're going to happen around the people," says Zachary Smith, CEO of Packet, a New York–based startup focused on enabling developers to access highly distributed hardware.

An AR or VR application that can't keep up with the movements of the user will either cause dizziness or prevent the experience from becoming immersive and real. And latency will be even more of a problem when self-driving cars, which rely heavily on computer vision and machine-learning algorithms, become mainstream.

"A 30-millisecond latency will not matter for loading your webpage but will really matter for a car to determine at 60mph if it should turn left or right to avoid crashing into a little girl," Smith says.

Meeting the Challenges of the Edge

Despite the need to bring computing closer to the edge, putting specialized hardware into every device might not be the final answer, Smith acknowledges. "Why not just put all the computers in the car? I think it really has to do with the evolution of how fast you can control the lifecycle of that," he says.

"When you put hardware into the world, it usually stays there for five to 10 years," Smith says, while the tech powering these experience-based use cases are evolving every six to 12 months.

Even very large companies with complicated supply chains often struggle with updating their hardware. In 2015, Fiat Chrysler had to recall 1.4 million vehicles to fix a security vulnerability that was exposed five years earlier. And giant chipmaker Intel is still scrambling to deal with a design flaw that exposes hundreds of millions of devices to hackers.

Movidius's El-Ouazzane acknowledges these challenges. "We know that every year we're going to have to change a range of products, because every year we're going to bring more intelligence at the edge, and we'll ask our customers to upgrade," he says.

To avoid constant recalls and to let customers make long-term use of their edge hardware, Movidius packs its processors with extra resources and capacity. "We need the ability for the next few years to perform upgrades on those products," El-Ouazzane says.

Packet, Smith's company, uses a different approach: It creates micro data centers that can be deployed in cities, closer to users. The company can then provide developers with very low-latency computational resources—as close as you can get to users without putting actual hardware at the edge.

"It is our belief that there will be a need for an infrastructure delivery mechanism to put hardware that can be accessed by developers in every city in the whole world," Smith says. The company already operates in 15 locations and plans to eventually expand to hundreds of cities.

But Packet's ambitions go further than creating miniature versions of the sprawling facilities operated by the likes of Google and Amazon. As Smith explains, deploying and updating specialized hardware isn't feasible with the public cloud. In Packet's business model, manufacturers and developers deploy specialized hardware at the company's edge data centers, where they can quickly update and refresh it when the need arises, while also making sure their users get superfast access to computing resources.

Hatch, one of Packet's customers, is a spin-off from Rovio, the mobile gaming company that created Angry Birds. The company runs Android on edge-computing servers to provide low-latency multiplayer game-streaming services to users with low-end Android devices.

"[Hatch] needs fairly specialized ARM servers in all these markets around the world," Smith says. "They have customized configurations of our server offering, and we put it in eight global markets across Europe, and soon it will be 20 or 25 markets. It feels like Amazon to them, but they get to run customized hardware in every market in Europe."

Theoretically, Hatch could do the same thing in the public cloud, but the costs would make it an inefficient business. "The difference is between putting 100 users per CPU versus putting 10,000 users per CPU," Smith says.

Smith believes this model will appeal to the developer generation that will drive the next software innovations. "What we're focused on is how to connect the software generation, people who grew up in the cloud, with specialized hardware primitives," Smith says. "We're talking about users who can't even open their MacBook to look inside, and that's the person who's going to innovate on the hardware/software stack."

Will the Clouds Dissipate?

With edge devices becoming capable of performing complicated computational tasks, is the future of the cloud in danger?

"To me, edge computing is a natural and logical next progression of cloud computing," says IBM Watson's High.

In fact, in 2016, IBM rolled out a set of tools that let developers seamlessly distribute tasks between the edge and the cloud, especially in IoT ecosystems, where edge devices already collect a lot of data about their immediate environment. And in late 2016, Amazon Web Services, another major cloud development platform, announced Greengrass, a service that enables IoT developers to run parts of their cloud applications on their edge devices.

None of this means the cloud is going away. "There's just a lot of things that are better done in the cloud, even when a lot of work is still being done on the edge," High says. This includes tasks such as aggregating data from many different sources and doing large-scale analytics with huge datasets.

"If we need to create models in the AI algorithms that we use in these edge devices, creating and training these models still is a very massive computational-intensive problem and oftentimes requires computational capacity that far exceeds what's available on these edge devices," High says.

El-Ouazzane agrees. "The ability to train AI models locally is extremely limited," he says. "From a deep-learning standpoint, the training has only one place to sit, and it's in the cloud, where you get enough compute resources and enough storage to be able to deal with large datasets."
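
A minimal sketch of that division of labor, under the assumption that training has already happened on cloud hardware: the weights are exported to a file, and the edge device only loads them and runs the forward pass. The file name, layer sizes, and helper functions below are hypothetical, and a real deployment would use an optimized inference runtime rather than raw NumPy.

```python
# Minimal sketch of the cloud/edge split: training happens in the cloud,
# the edge device only loads exported weights and runs inference.
# File names, layer sizes, and helpers here are hypothetical examples.
import numpy as np

def export_weights(path="model_weights.npz"):
    # Cloud side (stand-in): pretend these weights came out of a long,
    # compute-heavy training run, then save them for the edge device.
    rng = np.random.default_rng(0)
    np.savez(path,
             w1=rng.standard_normal((64, 16)), b1=np.zeros(16),
             w2=rng.standard_normal((16, 3)), b2=np.zeros(3))

def load_model(path="model_weights.npz"):
    # Edge side: load the already-trained parameters.
    return np.load(path)

def infer(model, features):
    # Tiny two-layer forward pass; no training happens on the device.
    h = np.maximum(features @ model["w1"] + model["b1"], 0.0)   # ReLU
    logits = h @ model["w2"] + model["b2"]
    return int(np.argmax(logits))

if __name__ == "__main__":
    export_weights()                       # done once, in the cloud
    model = load_model()                   # done on the edge device
    sample = np.random.default_rng(1).standard_normal(64)
    print("predicted class:", infer(model, sample))
```

The expensive part, training, stays in the cloud; the edge device's job is reduced to a handful of matrix multiplications.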

El-Ouazzane also envisions use cases in which edge devices are assigned mission- and time-critical tasks while the cloud takes care of more advanced inference that isn't latency-dependent. "We're living in a world of continuity between the cloud and the edge."

"There's a very symbiotic and synergistic relationship between edge computing and cloud computing," High says.



About Ben Dickson

Ben Dickson is a software engineer and tech blogger. He writes about disruptive tech trends including artificial intelligence, virtual and augmented reality, blockchain, the Internet of Things, and cybersecurity. Ben also runs the blog TechTalks.