History of Computer Vision and Its Principles
by Nichole Peterson | Aug 26, 2019 | Computer Vision | 10 minute read
Software and hardware technologies are advancing to a point where you can easily equip low-power, resource-constrained edge devices with AI computer vision capabilities. Here's a brief history of computer vision (CV): how it started, what CV can currently do, and how you can use it to empower your existing equipment.
Ultimately, developers everywhere can build and deploy deep learning computer vision applications that make businesses more functional, productive, and profitable.
CV got its start in the 1950s. Back then, it bore almost no resemblance to the real-time object detection and tracking technology you see now. The high cost and low power of sensors, data storage, and processing all limited industrial applications; the only places you saw computer image recognition were academic studies and science fiction. The first "seeing" robots, so to speak, were tested only on their ability to recognize basic shapes under highly controlled conditions. Robots might not be thinking for themselves yet, but you can harness artificial intelligence (AI) to let your devices see for themselves. Various advancements in computer optics, processor development, computer science, and big data have made CV what it is today, producing a steady string of breakthroughs over nearly seventy years.
As this history shows, science fiction became reality in a matter of decades. You can now run your apps on standalone edge devices, with no need for cloud computing resources, high-powered hardware, or constant connectivity.
With the plethora of training data and models being developed today, CV applications can identify, recognize, and track nearly any object a human can see. They can also detect things the human eye can't, such as minuscule defects in highly refined products or cancerous cells during medical procedures.
In general, the world is clearly heading toward using more computer vision-enabled devices, especially at the edge. Some estimates put the worldwide total of security cameras at around 350 million. Add to that all the existing cameras on smartphones, smart home devices, and so forth, and it represents a major opportunity for businesses and enterprises to embed deep learning, decision-making capabilities into their existing equipment.
In the next five years, you will likely see greater adoption of edge CV applications spanning multiple industries. Here are a few examples where deep learning CV could play a key role: security enterprises will rely on drone assistants to provide bird's-eye-view perspectives; grocery stores will keep customers safe and avoid liability by using cameras to detect spills immediately; retail stores will optimize product layout on the floor and learn which areas of a shelf customers focus on most; and factories will catch defects in manufacturing line output early and often as part of preventative maintenance.
Looking farther down the line, you may one day own a CV-powered self-driving car, with CV as ubiquitous a safety feature as airbags and seatbelts are now. Car manufacturers are already incorporating CV applications into vehicles with driver-assistance functionality, guiding parallel parking or braking when the vehicle comes too close to an object.
Additionally, as future generations continue to innovate, there is a massive opportunity to implement CV in everyday life, whether it's caring for kids, trying out a new recipe in the kitchen, or buying new clothes.
The core functions of CV all have to do with how your applications treat images and handle objects. Identifying the use case for computer vision as it applies to your business needs is the first step in using deep learning. As you enter the computer vision field and start building it into your application, here are some important, high-level terms you'll want to understand: image classification (assigning a label to an entire image), object detection (locating objects within an image via bounding boxes), object tracking (following detected objects across video frames), and image segmentation (labeling each pixel by the object or region it belongs to).
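To make object detection concrete, here is a minimal sketch using OpenCV's bundled Haar cascade face detector. This is one illustrative approach, not a method prescribed by this article, and "input.jpg" is a placeholder path:

```python
import cv2

# Load the pre-trained frontal-face Haar cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("input.jpg")  # "input.jpg" is a placeholder path
if image is None:
    raise SystemExit("Could not read input.jpg")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) bounding box per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)
print(f"Found {len(faces)} face(s)")
```

Even this classical, non-deep-learning detector runs comfortably on low-power hardware, which is why it is a common first experiment before moving to deep learning models on edge devices.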
A company may look to drive down data transfer and storage costs by deploying deep learning capabilities directly on the device being manufactured or sold. Whether the obstacle is hardware limitations, connectivity issues, environmental factors, or resource constraints, building deep learning applications for edge devices can be a significant challenge. Developers need to consider the size of their models, their app's processing requirements, and several other factors when working on an embedded device. It is also important to think about how to scale an application from a prototype to hundreds or thousands of devices.
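As one example of keeping model size in check, here is a minimal sketch that converts a Keras model to TensorFlow Lite with default optimizations (post-training quantization). The article doesn't prescribe a toolchain, so treat this as one illustrative option, and note that in practice you would convert your own trained model rather than the untrained one built here:

```python
import tensorflow as tf

# Build a small MobileNetV2; weights=None keeps the example self-contained
# (no download). In practice, load your trained model instead.
model = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), weights=None, classes=10
)

# Convert to TensorFlow Lite with default optimizations, which quantizes
# weights to shrink the file and speed up inference on edge hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Model size on disk: {len(tflite_model) / 1024:.0f} KiB")
```

Quantization like this typically cuts model size by roughly 4x, which directly addresses the storage and processing constraints described above.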
These challenges are being met by new entrants into the computer vision space aimed at keeping processing requirements low and file sizes small, and at offering API platforms that let developers plug in to core CV functions without an otherwise time-intensive development process.
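To illustrate the API-platform approach, the sketch below posts a camera frame to a detection endpoint. Everything here is hypothetical: the URL, request fields, and response shape are invented for this example and do not describe any specific vendor's API:

```python
import requests

# Hypothetical endpoint and credential -- placeholders for illustration only.
API_URL = "https://api.example.com/v1/detect"
API_KEY = "YOUR_API_KEY"

with open("frame.jpg", "rb") as f:  # "frame.jpg" is a placeholder path
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": ("frame.jpg", f, "image/jpeg")},
    )
resp.raise_for_status()

# Assume the service returns JSON like:
# {"objects": [{"label": "person", "confidence": 0.97}, ...]}
for obj in resp.json().get("objects", []):
    print(f'{obj["label"]}: {obj["confidence"]:.2f}')
```

The trade-off is the inverse of on-device inference: an API offloads the compute but reintroduces the connectivity and data transfer costs that edge deployment avoids.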
Create your account here as a first step to understanding how easily you can implement CV into your app. You'll get access to our API platform, SDK, and pre-trained models so you can bring deep learning to your dev board with minimal effort.
We want to give all developers, across any enterprise or industry, the tools needed to deploy CV simply and affordably, without having to dive into the inner workings of deep learning. Our robust SDK, pre-trained model libraries, and open API platform make integrating CV a straightforward process.
What we promise to you: our platform will work on any 32- or 64-bit ARM-based developer board running Linux. We'd love for you to test it out and start deploying your CV solutions as quickly and easily as possible. We can't wait to see what you build!