At alwaysAI, we want you to create unique and powerful models that help accomplish your computer vision goals. With the Model Training Toolkit you can create a custom object detection model with little experience and no coding. This video outlines the end-to-end process of doing exactly that - and in a way that is easy to follow. It is meant to be interactive, so you can pause it after each step to take action and then come back.
At alwaysAI, we have the singular mission of making the process of building and deploying computer vision apps to edge devices as easy as possible. That includes training your model, building your app, and deploying your app to edge devices such as the Raspberry Pi, Jetson Nano, and many others. alwaysAI apps are built in Python and can run natively on Mac and Windows, and in our containerized edge runtime environment optimized...
Many models, including those for pose estimation, may have much better performance when run on a GPU rather than a CPU. In this tutorial, we’ll cover how to run pose estimation on the Jetson Nano B01 and cover some nuances of running starter apps on this edge device.
Building and running your app on alwaysAI can be done a few different ways, depending on the platform you want to develop on and the device you want to deploy on. We’ve concentrated these options in one place for your convenience and we’ll update this document as the platform evolves!
Training a computer vision model is one component of a complex and iterative undertaking, which can often seem daunting. At alwaysAI we want to make the process simple and approachable. To get you started, we have compiled a general overview of the training process of Deep Neural Networks (DNNs) for use in computer vision applications. We will focus on supervised learning in this overview, which uses labeled training data to teach the model what the desired output is. This article provides...
The Jetson TX2 is part of NVIDIA’s line of embedded AI modules enabling super fast computation on the edge. The TX2 is a leg up from the Nano and will give you faster inference times in your AI applications. In fact, NVIDIA bills the Jetson TX2 as the fastest, most power-efficient embedded AI computing device: a 7.5-watt supercomputer on a module that brings true AI computing to the edge.
Please note: This setup guide can only be followed if you have a Linux computer. VM support is unverified.
The process of developing computer vision applications has been greatly simplified by alwaysAI, which now includes native support for macOS (Mojave and Catalina), and enables developers to get started prototyping applications right away with very little setup required.
In this guide, we’ll make an app that can count vehicles in real-time utilizing object counting, a technique used in computer vision that combines object detection and object tracking. Our final computer vision application will tell us how many objects of a specific kind are currently being detected in a video stream.
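At its simplest, the counting step filters each frame's detection results by label and confidence before tallying them. The sketch below is a minimal, framework-agnostic illustration of that idea; the function name and the (label, confidence) data layout are our own stand-ins, not the edgeIQ API.

```python
def count_objects(detections, target_label, min_confidence=0.5):
    """Count detections of a given label in one frame.

    `detections` is a list of (label, confidence) pairs, standing in for
    the per-frame results a real object detector would return.
    """
    return sum(
        1
        for label, confidence in detections
        if label == target_label and confidence > min_confidence
    )

# One mock frame of detection results
frame_detections = [
    ("car", 0.92), ("car", 0.81), ("truck", 0.77), ("car", 0.40),
]
print(count_objects(frame_detections, "car"))  # → 2 (low-confidence car filtered out)
```

In a real app, you would feed this function the detections produced for each video frame and pair it with an object tracker so the same vehicle is not counted twice across frames.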
There are many use cases in which it could be beneficial to have automated text messages sent that contain data obtained from computer vision. Perhaps you’d like to be notified whenever a person or animal walks into your yard or house (by using object detection), or when your kids appear to be fighting (by using pose estimation), or any other number of scenarios — the possibilities are endless! In this tutorial, we’ll show you how easy it is to accomplish this task by using a very basic...
If you have a host of images that you’d like to sort based on the presence of particular things (like people, cars, buildings, etc.), using computer vision classifiers can make this a pretty simple and fast thing to accomplish.
Sockets are endpoints for inter-process communication over a network and are supported by most platforms. Using sockets with the alwaysAI platform allows an application to communicate with external applications running locally or remotely, as well as with applications written in different programming languages. There are many methods for inter-process communication, but cross-platform communication is handled best by sockets.
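To make this concrete, here is a minimal localhost round trip using Python's standard `socket` module; it is a hypothetical example, not tied to any alwaysAI API. A server thread echoes back whatever it receives, and the "application" side sends a JSON-style detection result.

```python
import socket
import threading

# Server side: listen on a free localhost port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the message back

t = threading.Thread(target=echo_once)
t.start()

# Client side: connect, send a detection result, read the echo
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b'{"label": "person", "confidence": 0.9}')
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())
```

Because the data crosses the socket as plain bytes, the process on the other end could just as easily be written in C++, Java, or any other language with socket support.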
In this guide, we’ll be focusing on image classification. What is image classification? It is a technique used in computer vision to identify and categorize the main content in a photo or video.
The ability to recognize human activity with computer vision allows us to create applications that can interact with and respond to a user in real time. For instance, we can make an application that gives feedback to a user in the moment so that they can learn how to recreate the perfect golf swing, or that sends an immediate alert for help when someone has fallen, or that generates an immersive augmented reality experience based on the user's position.
The Raspberry Pi 3B+ and the Raspberry Pi 4 are ubiquitous among the hobbyist community of developers. They are reliable, easy-to-use single-board computers (SBCs) that are very affordable, making it easy to get your edge computer vision project up and running!
Detecting pedestrians and bicyclists in a cityscape scene is a crucial part of autonomous driving applications. Autonomous vehicles need to determine how far away pedestrians and bicyclists are, as well as what their intentions are. A simple way to detect people and bicycles is to use Object Detection. However, in this case we need much more detailed information about the exact locations of the pedestrians and bicyclists than Object Detection can provide, so we’ll use a technique called...
The Jetson Nano is a powerful, compactly packaged AI accelerator that allows you to run intensive models (such as the ones typically used for semantic segmentation and pose estimation) with shorter inference time, while meeting key performance requirements. The Jetson Nano also allows you to speed up lighter models, like those used for object detection, to the tune of 10-25 fps.
Detecting people can be an important part of applications across many industries. Common use cases include security applications that track who’s coming and going, as well as safety systems designed to keep people out of harm’s way.
At the AWS re:Invent conference, we deepened our collaboration with Qualcomm® Technologies by demonstrating real-time object detection and pose estimation on the Qualcomm® Robotics RB3 platform. Based on the Qualcomm® SDA845 system-on-a-chip (SoC), the RB3 platform enables the creation of high-performing computer vision applications on robots and other IoT devices. We built an application on a demo robot that showcased the powerful combination of the Qualcomm® Robotics RB3 platform, our deep...
alwaysAI offers a number of starter apps that make it easy to quickly deploy computer vision (CV) based applications. In this demo, I'm going to show you how to extend one of these starter apps and, hopefully, provide some insight about how you can create your own custom CV apps. The app we're going to end up with is meant to be used on a conference booth, to track the number of attendees who stop by and provide some basic metrics on how much time they spend at the booth.
The process of developing computer vision applications has been greatly simplified by alwaysAI, which includes native support for both Windows and Linux, and enables developers to get started prototyping applications right away with very little setup required.
In this tutorial, we will show you the steps needed to change the computer vision model in an alwaysAI application. First, set up your development computer and edge device (if you're using one). You should also have an app running, such as the object detector starter app. You can read more about setting up projects and the alwaysAI workflow here. Finally, you should have a Terminal window open.
If you don't have an alwaysAI account yet, you can sign up here.
In this tutorial, we will show you the steps needed to get a real-time object detector starter app up and running quickly and easily on an edge device. You should have already set up your development computer and installed the alwaysAI CLI. For more information on system requirements and supported boards, check out our Docs.
In this tutorial, we will show you the steps needed to boost the performance of your edge device. You will need a hardware accelerator that is supported by alwaysAI – such as Intel’s Neural Compute Stick 2. You can read more about supported edge devices and how to set the engine and accelerator using the edgeiq API in our documentation.
Although alwaysAI is focused on computer vision on the edge, you can easily install the platform on your local PC and do prototyping before going to the edge. In this article, I will show how to install the alwaysAI platform on your PC using either a virtual machine (macOS, Windows) or native installation (Linux). If you are already running a Linux desktop you can skip down to the Installing alwaysAI Platform section.
Note: The method described in this article is not your only option for installing the base operating system on your Raspberry Pi. You can also use NOOBS, an operating system installation manager, to install the base Raspbian operating system. For information on NOOBS, go to this website: https://www.raspberrypi.org/documentation/installation/noobs.md . If you do use NOOBS for the initial operating system installation, once finished, go to the Setting Up a Docker Container section of this document...
alwaysAI provides a platform to deploy computer vision applications onto edge devices.