Edge AI: The Future of Artificial Intelligence


This note articulates INSOFE’s point of view on the subject. At the outset, it is worth clarifying that Edge Computing (EC) and Edge AI (EAI) are closely related and overlap a great deal. Edge AI is, in essence, AI combined with EC, applied where there is a great deal of data to process and real-time responsiveness is essential.

One can use AI in EC and vice versa. This note focuses on the latter, i.e., the application of EC to AI (EAI). EAI is a fast-developing sector of AI, often described as its next wave. Simply put, AI is “the most common workload” in edge computing.

Edge computing in Machine Learning and AI is about deploying ML models on devices and making real-time predictions. Machine Learning techniques are used to analyze large data sets and predict outcomes for a target population. With an increasing focus on automation, there is a growing need to run these ML techniques on mobile phones, cameras, robots, TVs, and other gadgets to perform useful tasks.
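
To make this concrete, here is a minimal sketch of what “deploying a model on a device” often looks like in practice, using the TensorFlow Lite runtime. The model file name, input shape, and float32 input type are illustrative assumptions, not details from this note.

```python
# Minimal on-device inference sketch (assumes a converted model file "model.tflite"
# is already bundled with the device application).
import numpy as np
import tensorflow as tf

# Load the compact model into the on-device interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A single sensor/image reading, shaped to whatever the model expects.
sample = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

# Prediction happens locally: no network round trip, no data leaves the device.
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("local prediction:", prediction)
```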

As customers spend more time on their devices, businesses increasingly realize the need to bring essential computation onto the device to serve more customers. This is why the Edge Computing market will continue to grow rapidly in the coming years: it is forecast to reach a market value of $1.12 trillion by 2023. According to Gartner, centralized data centers process 91 percent of today’s data; however, by 2022, over 74 percent of all data will require analysis and action at the edge.

The real value lies in combining EC with disruptive digital technologies like Big Data, IoT, and 5G. EC will spread AI capabilities widely, moving them out of the cloud. Netflix (Open Connect program) and Microsoft HoloLens (edge cloud) are prime examples.

Edge computing vs Cloud computing

Edge AI is a genuine (and rapidly growing) phenomenon, with applications ranging from smartphones and smart speakers to vehicle sensors and security cameras. We explain EAI by highlighting relevant use cases. Major application areas include facial recognition systems, predictive maintenance, smart infrastructure, self-driving cars, autonomous drones, digital assistants, wearable health devices, and more.

Let’s take two scenarios to understand why and how edge computing is utilized in making predictions from a given data set.

A recommendation system in an online retail store. When you click on a product, before the site makes a recommendation, it has two ways of analyzing the selection. One way is to collect all the customer’s data on an extremely powerful external server and use a model to analyze the selection before making a suggestion. However, this option would have some latency and some data may be lost in transit.  

The other option is to load a clone of the ML model on each phone. When data is processed on the device, there is no need to send it to the cloud, which reduces traffic and helps preserve privacy. However, it would require frequent model updates. In this case the former is the better option; cloud computing is better suited to analyze the data and suggest other products to the consumer.
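
A rough sketch of the two options is below. The endpoint URL, model file, and feature vectors are illustrative assumptions only.

```python
# Hypothetical contrast between the two options described above.
import numpy as np
import requests

def recommend_via_cloud(user_features):
    # Option 1: ship the click data to a powerful central server and wait for a reply.
    # Adds network latency and sends user data off the device.
    resp = requests.post("https://recommender.example.com/suggest",
                         json={"features": user_features}, timeout=2.0)
    return resp.json()["product_ids"]

def recommend_on_device(user_vector, item_embeddings):
    # Option 2: a cloned (usually compressed) model scores items locally.
    # No round trip and no data leaves the phone, but the clone must be refreshed
    # periodically as the central model is retrained.
    scores = item_embeddings @ np.asarray(user_vector)
    return np.argsort(scores)[::-1][:5]  # indices of the top-5 items
```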

A self-driving car on the road. The car takes a picture so the image can be analyzed to detect the presence of a traffic signal and the color of the light. Again, the designer has two ways of analyzing this information: send the data to a powerful server, or deploy a model in the car. In this case, latency or loss of data in transmission is too dangerous for the passengers, and hence edge computing is the answer.
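
A rough latency-budget check makes the same point numerically. All of the figures below are illustrative assumptions, not measurements.

```python
# Illustrative latency budget for the traffic-light decision (all figures are assumptions).
NETWORK_ROUND_TRIP_MS = 120   # mobile-network round trip to a distant server
CLOUD_INFERENCE_MS    = 15    # large model on a powerful server
EDGE_INFERENCE_MS     = 40    # smaller model on the car's onboard computer
DEADLINE_MS           = 100   # time within which the car must react

cloud_total = NETWORK_ROUND_TRIP_MS + CLOUD_INFERENCE_MS
edge_total  = EDGE_INFERENCE_MS

for name, total in [("cloud", cloud_total), ("edge", edge_total)]:
    verdict = "meets" if total <= DEADLINE_MS else "misses"
    print(f"{name}: {total} ms -> {verdict} the {DEADLINE_MS} ms deadline")
```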

Hence, depending on the application, sometimes it makes sense to deploy the model on the device as opposed to having a distant server analyze all the information.  

Challenges that need to be solved to deploy Edge ML systems

Typically, the devices where ML models need to be deployed are not as powerful as the servers in the cloud. For effective use, edge computing researchers focus on solving three problems:

  1. Creating more powerful and cheaper hardware for edge devices. This lets them host more capable models at lower cost.
  2. Creating smaller models that work almost as well as their larger counterparts (see the quantization sketch after this list). This lets engineers deploy ML on less powerful hardware with comparable accuracy.
  3. Cheaper short-range data transfer (Bluetooth, etc.). This lets edge devices transmit data to nearby devices cheaply and efficiently.
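
The second problem, model compression, is the most ML-specific of the three. Below is a minimal sketch of one common technique, post-training dynamic-range quantization with TensorFlow Lite; the toy Keras model is a stand-in for whatever trained model a team actually uses.

```python
# Post-training quantization sketch: shrink a trained model so it fits on an edge device.
# The small Keras model here is a placeholder for a real trained model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite with dynamic-range quantization: weights are stored as
# 8-bit integers, typically cutting model size by roughly 4x with a small accuracy cost.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print("quantized model size:", len(tflite_model), "bytes")
```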

All three have turned out to be hot areas of research and are seeing tremendous progress, helping make edge AI ubiquitous.

Examples of Edge Computing AI applications

Many industries rely on edge technology; in driverless cars, for example, it helps reduce power usage and improve battery life. It is also used in robotics, surveillance systems, and other devices. As a result, the market for Edge AI software is predicted to rise from $355 million to $1.12 trillion by 2023.

Edge computing can also be used to track patients and alert care providers in a timely manner. Smartwatches have been adding a lot of “smartness” in recent times: an individual’s pulse rate, blood pressure, oxygen levels, sleep patterns, and so on are analyzed to give a real-time view of their health and provide a consolidated picture to doctors in case of illness.
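
A minimal sketch of this pattern is below: readings are analyzed on the watch itself, and only summaries and alerts are escalated. The threshold, field names, and notification function are illustrative assumptions.

```python
# On-watch monitoring sketch: analyze readings locally, send only summaries and alerts.
from statistics import mean

HEART_RATE_ALERT_BPM = 120  # hypothetical alert threshold

def notify_care_provider(summary):
    print("ALERT sent to care provider:", summary)  # placeholder for a real notification

def process_readings(heart_rates_bpm):
    summary = {"avg_bpm": mean(heart_rates_bpm), "max_bpm": max(heart_rates_bpm)}
    if summary["max_bpm"] > HEART_RATE_ALERT_BPM:
        notify_care_provider(summary)   # escalate immediately from the device
    return summary                      # periodically synced for the doctor's consolidated view

process_readings([72, 75, 131, 90])
```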

Another example is using edge computing to monitor manufacturing plants, predict when poisonous gases are leaking, and stop the furnace if that happens.
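
The key point is that the shutdown decision is made on a local gateway rather than in the cloud. The sketch below illustrates this; the concentration limit and the shutdown call are illustrative assumptions.

```python
# Plant-floor control-loop sketch: the furnace is stopped locally, without a cloud round trip.
GAS_PPM_LIMIT = 50  # hypothetical safe concentration limit

def stop_furnace():
    print("emergency shutdown signal sent to furnace controller")

def control_step(gas_ppm_reading):
    if gas_ppm_reading > GAS_PPM_LIMIT:
        stop_furnace()                  # act immediately, on site
        return "furnace stopped"
    return "normal operation"

print(control_step(gas_ppm_reading=72))
```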

Edge or Cloud?

There are instances, however, where a hybrid approach may be needed.

For instance, when designing the safety features of a self-driving car, data needs to be gathered at the individual level as well as compared across the entire population to understand the drawbacks of the designs and come up with good solutions. 

In such instances, a hybrid approach is used to gather data at different levels: the edge models take care of immediate driving needs, while the global models inform better long-term designs.

Edge will work in a complementary way with the cloud. Data will continue to be processed in the cloud, but user-generated data that belongs only to users can be processed at the edge.
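
A minimal sketch of this hybrid split is below: the immediate decision stays on the vehicle, and only aggregated, non-sensitive summaries go to the cloud. The threshold, field names, and telemetry endpoint are illustrative assumptions.

```python
# Hybrid edge/cloud sketch: decide locally, upload only batched summaries for fleet analysis.
local_buffer = []

def handle_event(local_model_score):
    # Edge path: the immediate driving decision is made on the vehicle.
    decision = "brake" if local_model_score > 0.8 else "continue"
    local_buffer.append(local_model_score)
    return decision

def sync_to_cloud():
    # Cloud path: a periodic, batched summary supports long-term design improvements.
    summary = {"n_events": len(local_buffer),
               "avg_score": sum(local_buffer) / len(local_buffer)}
    print("would POST to https://fleet.example.com/telemetry:", summary)

print(handle_event(local_model_score=0.91))
sync_to_cloud()
```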

Practicing moderation

However, as with many other technologies, engineers can get too enthusiastic about edge computing. While the technology is becoming feasible, data scientists should be careful in determining whether a given application actually requires edge computing.

In one instance, a search engine advertising consultant was trying to understand consumer behavior to ascertain the kind of ads to show to their audience. 

The keywords they came across were “chicken soup recipe”, “Rain on Me by Lady Gaga”, and “morning prayers”. The edge algorithms made wildly different recommendations because, at face value, these appear to be keywords used by different age groups.

However, a more in-depth analysis revealed that the searches came from the same IP address and the same individual, made at different times of day to suit the need of the hour in their routine. The edge model had unnecessarily complicated the situation.
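
The “more in-depth analysis” can be as simple as grouping the queries by IP address before personalizing, as sketched below. The timestamps and IP value are illustrative assumptions; the keywords are the ones from the example above.

```python
# Group the same keywords by IP address before drawing conclusions about the audience.
import pandas as pd

queries = pd.DataFrame({
    "ip":      ["203.0.113.7"] * 3,
    "hour":    [7, 12, 22],
    "keyword": ["morning prayers", "chicken soup recipe", "rain on me by lady gaga"],
})

# Per-IP view: one device, three very different keywords across the day.
profile = queries.groupby("ip").agg(n_queries=("keyword", "size"),
                                    keywords=("keyword", list))
print(profile)
```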

The ethical aspects of collecting data and making predictions are another crucial consideration when building applications.

It is important to note that extraneous variables influence human interactions and behavior; applications should not be built just because they have become feasible.

It is important to ascertain the role edge computing would play in a solution and to assess the need for this form of analysis in making predictions, while remaining aware that some predictions can be misleading as well.

Research by INSOFE: https://www.insofe.edu.in/

 

Author Name: Madhulika Vajjhala

