February 27, 2020

Is hardware or software more important for Edge AI?

As we enter 2020 and move past the early-year excitement of the newest consumer electronics, it’s a good moment to acknowledge other technologies coming into primetime.

Edge AI’s potential is tied to the explosive growth in IoT and connected devices. It is also closely tied to 5G, with significant implications for what we’re looking to invest in to further Verizon.

In asking which matters more for powering Edge AI – hardware or software – the short answer is both: hardware and software are interdependent in unlocking Edge AI’s potential for the consumer.

Understanding Edge AI, and what it means for connectivity, entails a quick look back at infrastructure. Cloud computing marked a pivotal moment, but it is now being replaced in some cases by edge infrastructure – which unlocks data and makes it actionable in real time.

Edge infrastructure (or the network edge) is where data is collected and resides, and it is increasingly important as IoT grows. Connected devices are set to reach 200 billion this year, and IoT is a major use case for AI: the number of edge AI devices is forecast to grow to 2.6 billion units by 2025, according to Tractica.

Scalability, power consumption, connectivity, and latency are among the many factors driving demand for edge infrastructure in the form of micro data centers or distributed computing architectures.

Processors are now embedding AI capability into IoT devices at the network edge. In Edge AI, the AI algorithms are processed locally on a hardware device, so the system can respond quickly. Because inference is done locally, it bypasses the need for an expensive data upload or a costly compute cycle in the cloud.

Edge AI and Hardware

Since Edge AI means that AI algorithms are processed locally, on the network edge or on a hardware device, the algorithms use data (sensor data or signals) created on the network or the device itself. A device using Edge AI does not need to be connected in order to work properly; it can process data and act on it independently, without a connection. To use Edge AI, the user needs a device comprising a microprocessor and sensors. An edge approach makes sense when real-time decision making is needed, when low latency or no lag time is necessary, and when the network itself may not always be available.
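To make the idea concrete, here is a minimal sketch of local inference. The threshold rule below is a hypothetical stand-in for a real trained model; the point is that the decision is made on the device, with no network round-trip and no data upload.

```python
def classify_reading(temperature_c: float, threshold_c: float = 80.0) -> str:
    """Run 'inference' locally on a raw sensor value.

    No data leaves the device: the reading is evaluated in place,
    and only the resulting decision ever needs to be reported.
    The 80°C threshold is purely illustrative.
    """
    return "alert" if temperature_c > threshold_c else "ok"


# The device keeps operating even with no connection at all:
readings = [72.5, 79.9, 85.3]
decisions = [classify_reading(r) for r in readings]
print(decisions)  # ['ok', 'ok', 'alert']
```

A production device would swap the threshold rule for a compact model (for example, a quantized neural network running on one of the accelerators discussed below), but the control flow – sense, infer locally, act – is the same.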

Mobile phones, wearables, smart speakers, PCs/tablets, autonomous cars, drones, robots, and security cameras will all drive greater use of Edge AI. 

Depending on the AI application and device category, there are several hardware options for performing AI edge processing, including CPUs, GPUs, TPUs, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), and System-on-a-Chip (SoC) accelerators.

Edge AI and Software

IoT edge software enables intelligence, real-time analytics, machine learning, automation, and inference at the edge. Companies developing these offerings are enhancing traditional deployments with edge AI capabilities so they can scale more quickly, accelerate time-to-value, and enable exciting new use cases such as anomaly detection for predictive maintenance, defect detection, and others. Optimization software can help edge AI models run more efficiently. Software that is multi-access edge computing (MEC) agnostic is also a primary space for investment consideration. These are the types of companies that we at Verizon Ventures are targeting for strategic investment.

Edge AI

Since software to an extent “unlocks” hardware, it’s difficult to argue that one takes priority over the other in driving the potential of Edge AI. As Edge AI continues to unfold, there will be more possibilities to explore, with positive results for the consumer: moving AI and other analytics closer to where data is created delivers faster, better user experiences.



Tags: Hardware, Verizon Ventures, Software, AI