Artificial intelligence technologies have become a powerful way of creating value.

Many AI applications have already been widely adopted, for example, predictive analytics in recommendation engines for social networks and streaming platforms, or face and voice recognition in smartphones and voice assistants. In 2017, 99% of AI-related semiconductor hardware was centralised in the cloud, controlled by only a handful of players including Intel and Nvidia, and represented a market value of around USD5.5bn.

However, as sensors and compute technologies continue to evolve and become more affordable, AI is spreading to industries including automotive, industrial automation, and healthcare. These applications require low latency, stronger security, and data privacy, requirements the current cloud computing architecture cannot fully meet. As a consequence, the landscape is becoming more distributed, with AI computing capabilities moving to the edge, into the IoT nodes themselves, and giving rise to a new paradigm: edge computing.

In this paper, we explore the fundamental differences between cloud computing and edge computing architectures. Looking at use cases across several industries, we highlight why edge computing is attracting so much attention. We also examine the related semiconductor market, which is expected to soar from around USD100m to USD5.5bn by 2025. Unlike the cloud, edge computing appears set to benefit a much broader range of players, from start-ups to well-established microcontroller vendors and outsourced embedded computing companies.
