The alwaysAI Blog

Taiga Ishida

Recent Posts

How To Get Started with the NVIDIA Jetson TX2 on alwaysAI

The Jetson TX2 is part of NVIDIA’s line of embedded AI modules, enabling high-speed computation at the edge. The TX2 is a step up from the Jetson Nano and will give you faster inference times in your AI applications. NVIDIA bills the Jetson TX2 as its fastest, most power-efficient embedded AI computing device: a 7.5-watt supercomputer on a module that brings true AI computing to the edge.

Please note: this setup guide can only be followed on a Linux computer; running it inside a virtual machine is unverified.

How To Install alwaysAI on a Mac

alwaysAI greatly simplifies the process of developing computer vision applications. With native support for macOS (Mojave and Catalina), developers can start prototyping applications right away with very little setup.
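To give a sense of how little code a prototype needs, here is a minimal sketch of an object detection app in the style of the alwaysAI starter apps, using the edgeiq library. The model ID, the webcam index, and the confidence threshold are assumptions for illustration, and API details may differ across edgeiq versions.

```python
import edgeiq


def main():
    # Load a pre-trained object detection model from the alwaysAI model catalog.
    # "alwaysai/mobilenet_ssd" is the model used in the starter apps (assumed here).
    obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")
    obj_detect.load(engine=edgeiq.Engine.DNN)

    # Read frames from the Mac's webcam and view results in the browser-based Streamer.
    with edgeiq.WebcamVideoStream(cam=0) as video_stream, \
            edgeiq.Streamer() as streamer:
        while True:
            frame = video_stream.read()
            results = obj_detect.detect_objects(frame, confidence_level=0.5)
            # Draw bounding boxes and labels on the frame before streaming it.
            frame = edgeiq.markup_image(frame, results.predictions)
            streamer.send_data(frame, [p.label for p in results.predictions])
            if streamer.check_exit():
                break


if __name__ == "__main__":
    main()
```

Run with `aai app start` (or `python app.py` inside the alwaysAI environment) and open the Streamer URL printed to the console to see live detections.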

How to Integrate alwaysAI with External Applications Using TCP Sockets

Sockets are endpoints for inter-process communication over a network and are supported by most platforms. Using sockets with the alwaysAI platform allows an application to communicate with external applications, whether they run on the same device or on a remote machine, and with applications written in other programming languages. There are many methods of inter-process communication, but sockets handle cross-platform communication best.
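As a concrete illustration, here is a minimal sketch of the client side using only Python's standard socket and json modules: an alwaysAI app could call a helper like this to push inference results to an external application. The host, port, length-prefixed framing, and results format are assumptions for illustration, not part of the alwaysAI API, and a server must be listening on the given address for the connection to succeed.

```python
import json
import socket

# Hypothetical address of the external application listening for results.
SERVER_HOST = "127.0.0.1"
SERVER_PORT = 5001


def send_results(results: dict) -> None:
    """Serialize results as JSON and send them over a TCP socket."""
    payload = json.dumps(results).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((SERVER_HOST, SERVER_PORT))
        # Prefix the payload with its 4-byte length so the receiver knows
        # exactly how many bytes to read before parsing the JSON.
        sock.sendall(len(payload).to_bytes(4, "big") + payload)


if __name__ == "__main__":
    # Placeholder detection result; in an alwaysAI app this would come from
    # the model's predictions for the current frame.
    send_results({"label": "person", "confidence": 0.92})
```

The receiving application, in any language, reads the 4-byte length, then that many bytes, and decodes the JSON, which keeps the protocol simple and language-agnostic.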