What is Edge AI?
Edge computing brings data storage and computation closer to the users who need them, by running operations on local devices like laptops, Internet of Things (IoT) devices, or dedicated edge servers. Because processing happens locally, edge workloads are not affected by the latency and bandwidth constraints that often hamper cloud-based operations.
Edge AI combines edge computing with artificial intelligence (AI). This involves running AI algorithms on local devices with edge computing capacity. Edge AI does not require connectivity and integration between systems, allowing users to process data on the device in real time.
The majority of AI processes are currently performed in cloud-based data centers, because they require substantial computing capacity. The downside is that connectivity or network issues can cause downtime or significantly slow down the service. Edge AI eliminates these issues by making AI processing an integral part of the edge device itself: data is aggregated and served locally, saving time because nothing needs to travel to another physical location.
This is part of our series of articles about machine learning operations.
What Are the Benefits Of Edge AI?
Real-Time Data Processing
The most important advantage of Edge AI is that it brings high-performance computing capabilities to the edge, where sensors and IoT devices are located. AI edge computing enables AI applications to run directly on field devices, processing field data and running machine learning (ML) and deep learning (DL) algorithms on the device.
Data processing in the cloud takes seconds. Data processing at the edge, on the other hand, can take milliseconds or less. For example, autonomous vehicles performing data processing at the edge can make decisions much faster than if data is processed in the cloud. Since these decisions impact human lives, near real-time data processing is critical.
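As a rough illustration of on-device latency, the sketch below times repeated local inference calls with ONNX Runtime, a runtime commonly used on edge hardware. The model file edge_model.onnx and its input shape are placeholders assumed for this example, not part of any specific deployment.

```python
import time

import numpy as np
import onnxruntime as ort  # lightweight inference runtime often used on edge devices

# Hypothetical model exported for on-device use; the path and input shape are assumptions.
session = ort.InferenceSession("edge_model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Simulated sensor frame (a batch of one 224x224 RGB image).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up once, then time repeated local inferences.
session.run(None, {input_name: frame})
runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: frame})
elapsed_ms = (time.perf_counter() - start) * 1000 / runs

# For small models on modest edge hardware this is typically a few milliseconds,
# with no network round trip added on top.
print(f"Average on-device inference latency: {elapsed_ms:.2f} ms")
```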
Privacy
Edge AI operations perform the majority of data processing locally on an edge device. This means that less data is sent to the cloud and other external locations. As a result, the risk that data might be misappropriated or mishandled is reduced.
However, this does not mean that the data is automatically secured or protected from hackers and other security threats. For these purposes, the Trusted Computing Group has created the TPM 2.0 hardware security standard, which provides edge devices with secure data storage, encrypted authentication, and data integrity auditing.
Reduction In Internet Bandwidth and Cloud Costs
Edge AI does most of its data processing locally, sending less data over the internet and saving significant bandwidth. Cloud-based AI services can also be expensive; Edge AI lets you treat costly cloud resources as a post-processing data store that collects data for future analysis, rather than using them for real-time field operations.
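To make the bandwidth saving concrete, the sketch below compares the size of one raw camera frame with the compact result an edge device would send after processing it locally. The summary schema (device ID, detections) is invented for the example.

```python
import json

import numpy as np

# A single raw frame from a hypothetical 1080p RGB camera (~6 MB uncompressed).
raw_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
raw_bytes = raw_frame.nbytes

# After on-device inference, only a small structured result needs to leave the device.
# The field names below are illustrative, not a standard.
summary = {
    "device_id": "camera-01",
    "timestamp": "2024-01-01T12:00:00Z",
    "detections": [{"label": "person", "confidence": 0.93}],
}
summary_bytes = len(json.dumps(summary).encode("utf-8"))

print(f"Raw frame: {raw_bytes / 1e6:.1f} MB")
print(f"Edge summary: {summary_bytes} bytes")
print(f"Reduction factor: ~{raw_bytes / summary_bytes:,.0f}x")
```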
Using Less Power
Because Edge AI processes data locally, it reduces energy costs. Edge computing devices are designed for highly efficient power consumption, which means the power requirements for running AI at the edge are much lower than in cloud data centers.
Edge AI Use Cases and Applications
There is a wide range of Edge AI applications. Notable examples include facial recognition and real-time traffic updates on semi-autonomous vehicles, connected devices, and smartphones. Additionally, video games, robots, smart speakers, drones, wearable health monitoring devices, and security cameras are all starting to support Edge AI.
Here are a few areas where Edge AI will grow in usage and importance:
- Security camera detection processes—traditional surveillance cameras record footage for hours, then store it and use it as needed. With Edge AI, the detection algorithm runs on the camera itself, allowing it to identify and respond to suspicious activity as it happens, which makes the service more efficient and less expensive (a minimal on-device detection loop is sketched after this list).
- Image and video analysis—for example, Edge AI can help generate automated responses to audiovisual stimuli in robots. It can also be used for real-time recognition of spaces and scenes.
- Improved effectiveness of the Industrial Internet of Things (IIoT)—AI algorithms can monitor for potential defects and errors in the production chain and enable real-time adjustments to production processes.
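As referenced in the first bullet above, here is a minimal sketch of an on-device detection loop using simple OpenCV frame differencing. The camera index and pixel-change threshold are arbitrary assumptions; a production camera would run a trained detector rather than raw differencing, but the data flow (everything processed locally, only events surfaced) is the same.

```python
import cv2

# Hypothetical local camera at index 0; the threshold values are arbitrary for this sketch.
CHANGED_PIXEL_THRESHOLD = 5000

cap = cv2.VideoCapture(0)
ok, previous = cap.read()
if not ok:
    raise RuntimeError("No camera available at index 0")
previous_gray = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect motion entirely on the device: difference against the previous frame.
    diff = cv2.absdiff(previous_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask)

    if changed > CHANGED_PIXEL_THRESHOLD:
        # Only this event (not the raw video stream) would be forwarded or logged.
        print(f"Suspicious activity detected: {changed} changed pixels")

    previous_gray = gray

cap.release()
```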
Edge AI technology can be applied across many domains. During the COVID-19 crisis, for example, AI-powered solutions were deployed to deliver accurate information in real time, and in healthcare, AI running on field medical devices helped monitor, test, and treat patients more effectively.
Cloud Computing AI vs. Edge AI vs. Federated Learning
Let’s look at three different approaches to deploying AI training and inference, and their pros and cons.
Cloud Computing AI
Cloud computing provides highly scalable, low-cost hardware, which is compelling for AI because it allows organizations to train large-scale models quickly. However, while the cloud is well suited to model training, it can be challenging to use for inference—using AI models to provide predictions in response to user queries.
Using the cloud for inference raises several challenges (a minimal sketch of the round trip appears after this list):
- Cloud-based inference may have issues with real-time responses, which are needed for many AI use cases. This is because it is necessary to transfer a request from an edge device to the cloud, then transfer the response back to the edge device.
- Even if the use case does not require real-time response, cloud-based inference has inherently high latency, which degrades the user experience.
- If the edge device has no Internet connection, or experiences connectivity issues, it cannot perform cloud inference at all. Even if the device has an Internet connection, it may not have sufficient bandwidth to transfer the relevant amount of data in a reasonable timeframe.
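To make the latency and connectivity points above concrete, here is a minimal sketch that times a single cloud inference round trip and handles the case where the network is unreachable. The endpoint URL and payload format are placeholders invented for the example, not a real service.

```python
import time

import requests

# Placeholder endpoint for a hypothetical cloud inference service.
CLOUD_INFERENCE_URL = "https://cloud-inference.example.com/v1/predict"

payload = {"sensor_reading": [0.1, 0.4, 0.7]}

start = time.perf_counter()
try:
    response = requests.post(CLOUD_INFERENCE_URL, json=payload, timeout=2.0)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Total latency includes the network round trip on top of model execution time.
    print(f"Cloud inference round trip: {elapsed_ms:.0f} ms, status {response.status_code}")
except requests.exceptions.RequestException as exc:
    # With no connectivity, or a link that is too slow, cloud inference cannot complete at all.
    print(f"Cloud inference unavailable: {exc}")
```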
Edge AI
With Edge AI, AI models run on the edge device itself, with no network latency and no need for an Internet connection. This makes it possible to perform much faster inference and support real-time use cases.
However, Edge AI also raises several issues, because models need to be trained on an ongoing basis using data from the edge devices:
- The system needs to build a dataset by transferring data from a large number of edge devices to the cloud. This can be complex and difficult to achieve, depending on the connectivity and bandwidth available to the edge devices.
- Storing all the data in a central location creates privacy and compliance issues. Legislation like the GDPR makes it difficult to train AI models based on end user data, and a centralized database represents a security risk.
Federated Learning
A newer pattern known as federated learning can resolve the issues of both cloud-based and edge-based AI (a minimal numerical sketch follows this list). The pattern works as follows:
- AI models are trained on edge devices using their local data.
- Updates to the model are sent to a central server, without having to send over the actual edge device data, resolving many of the privacy and security issues.
- Model updates are merged into a consolidated model, and the updated model is pushed back to client devices.
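The sketch below illustrates the core of this loop with federated averaging over simulated devices: each device trains a small linear model on data that never leaves it, only the updated weights are shared, and the server averages them into a new global model. The data, model, and hyperparameters are toy placeholders chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_weights = np.array([2.0, -1.0])  # ground truth for the toy regression problem


def local_training_round(global_weights, n_samples=200, lr=0.1, steps=20):
    """Train on data that never leaves the (simulated) edge device."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_weights + rng.normal(scale=0.1, size=n_samples)
    w = global_weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n_samples  # gradient of mean squared error
        w -= lr * grad
    return w  # only the updated weights are shared, never X or y


# Server side: start from a shared model and merge the updates from each device.
global_weights = np.zeros(2)
for round_num in range(5):
    device_updates = [local_training_round(global_weights) for _ in range(10)]
    global_weights = np.mean(device_updates, axis=0)  # federated averaging
    print(f"Round {round_num + 1}: global weights = {np.round(global_weights, 3)}")
```

After a few rounds the averaged weights converge toward the values that fit every device's data, even though no raw data was ever centralized.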
Edge AI with Run:AI
Kubernetes, the platform on which the Run:AI scheduler is based, has a lightweight version called K3s, designed for resource-constrained computing environments like Edge AI. Run:AI automates and optimizes resource management and workload orchestration for machine learning infrastructure. With Run:AI, you can run more workloads on your resource-constrained servers.
Here are some of the capabilities you gain when using Run:AI:
- Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
- No more bottlenecks—you can run workloads on fractions of GPU and manage prioritizations more efficiently.
- A higher level of control—Run:AI enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.
Run:AI simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their deep learning models.
Learn more about the Run:AI GPU virtualization platform.