The alwaysAI Blog

alwaysAI announces exciting new features

If you haven’t already started thinking about building computer vision into your products, now is the time.

Using Pose Estimation on the Jetson Nano with alwaysAI

Many models, including those for pose estimation, often perform much better on a GPU than on a CPU. In this tutorial, we'll cover how to run pose estimation on the Jetson Nano B01 and discuss some nuances of running starter apps on this edge device.

How To Get Started with the NVIDIA Jetson TX2 on alwaysAI

The Jetson TX2 is part of NVIDIA's line of embedded AI modules that enable fast computation at the edge. The TX2 is a step up from the Nano and will give you faster inference times in your AI applications. NVIDIA bills the Jetson TX2 as the fastest, most power-efficient embedded AI computing device: a 7.5-watt supercomputer on a module that brings true AI computing to the edge.

Please note: this setup guide requires a Linux computer. Virtual machine support is unverified.

Getting Started with the Jetson Nano using alwaysAI

The Jetson Nano is a powerful, compactly packaged AI accelerator that lets you run intensive models (such as those typically used for semantic segmentation and pose estimation) with shorter inference times while meeting key performance requirements. The Jetson Nano also speeds up lighter models, such as those used for object detection, to the tune of 10-25 FPS.

alwaysAI Joins NVIDIA Inception Program

alwaysAI today announced it has joined the NVIDIA Inception program, which is designed to nurture startups revolutionizing industries with advancements in AI and data sciences.