Finding Things in an Image in Real Time on the Edge

Recent advances in technology have greatly broadened the scope of object detection and related computer vision (CV) services. Hardware with advanced features paired with smarter neural networks has attracted developers and data scientists from numerous industries to start leveraging computer vision to solve complex business challenges. Combined with the rising popularity of embedded devices capturing data on the edge, computer vision on a grand scale has been exploding with seemingly endless potential to revolutionize the way the world collects and analyzes real-world data.

Software developer working on an embedded device for object detection


What is Object Detection?

Object detection is the process of identifying objects within images, often in real time. For example, object detection can identify and isolate instances of cars, humans, bikes, and buses from a real-time video feed of a busy street. It isolates objects of interest through recognition, localization, and classification: the model recognizes that an object is present, localizes it with a bounding box, and classifies it with a label and a confidence score.
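To make that concrete, here is a minimal sketch of what a detector's output typically looks like and how an application might filter it. The detections below are hard-coded for illustration; in practice they would come from a trained model, and the exact field names are assumptions rather than any particular library's API.

```python
# Hypothetical detection results; a real model (e.g. an SSD or YOLO variant)
# would produce one entry per detected object in a frame.
detections = [
    {"label": "car",    "confidence": 0.91, "box": (34, 50, 210, 180)},
    {"label": "person", "confidence": 0.78, "box": (220, 40, 280, 200)},
    {"label": "bike",   "confidence": 0.42, "box": (300, 90, 360, 190)},
]

def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

confident = filter_detections(detections)
for d in confident:
    print(f"{d['label']}: {d['confidence']:.2f} at {d['box']}")
```

Thresholding like this is usually the first post-processing step, since low-confidence detections tend to be false positives.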

Detection allows developers to employ additional core computer vision functions — including object tracking and counting, image classification, and more — to equip their devices with machine learning capabilities. For example, a security camera in a retail shopping mall could be trained to detect the presence of objects in a store, and then be further trained to classify one of those objects — such as a person — more narrowly by gender, age range, or other identifying characteristics. 
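A simple example of layering one of those functions, counting, on top of detection: given the labels a detector produces for a frame, tallying them gives a per-class count. The labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical per-frame labels from a detector running on a store camera.
frame_detections = ["person", "person", "shopping_cart", "person", "dog"]

def count_objects(labels):
    """Tally how many instances of each object class were detected."""
    return Counter(labels)

counts = count_objects(frame_detections)
print(counts["person"])  # number of people detected in this frame
```

From here, each "person" detection could be cropped from the frame and passed to a second, more specialized classifier, as described above.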

Working with Embedded Devices

Working with embedded devices on the edge means building applications for deployment on system-on-chip (SoC) environments such that data processing happens directly on the device rather than on a cloud server. Utilizing embedded devices can resolve a number of issues that are caused by relying on the cloud, including the latency issues and high bandwidth requirements typically involved in handling image data. 


For example, a simple dual-core ARM chip without a GPU can successfully run machine learning applications with memory to spare. However, working with edge devices must be thought through strategically: you have to account for constraints such as the device's processing power and storage capacity. These device-side considerations affect your development decisions, including the size of the model you use in your application. A larger model may be more accurate, but it demands more processing power and can slow inference on the device. If you know your application will run in a resource-constrained environment, you may choose a smaller model, or optimize a large one through quantization and pruning to make it more efficient.
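As a rough illustration of what quantization does, the sketch below maps floating-point weights onto the signed 8-bit range using an affine scale and zero point. Real frameworks handle this (plus calibration) automatically; this pure-Python version only demonstrates the arithmetic and the accuracy trade-off.

```python
def quantize(weights, num_bits=8):
    """Map float weights onto the signed integer range via affine quantization."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
```

Each 32-bit float becomes an 8-bit integer, cutting weight storage to a quarter, at the cost of a small rounding error bounded by the scale factor.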

Advantages of Edge Computer Vision

Edge devices do not require an internet connection to function, because the computation for object detection happens entirely on the device itself. This on-device, real-time processing can be crucial when safety is a concern — for instance, a self-driving car needs to make decisions without latency issues, and it can do so because it does not depend on a cloud-connected data analytics process. This is a major advantage of embedded systems: they do not need to relay their data, await processing, and then respond to the results, as cloud-tethered computer vision solutions must. Consequently, capturing and processing real-time data at the source is quickly becoming essential for today's businesses.

Apply to Join our Beta Program

We are providing professional developers with a simple and easy-to-use platform to build and deploy computer vision applications on embedded devices.

The alwaysAI Private Beta program is currently accepting applications. Apply now!

APPLY FOR BETA