How to Get Started with the NVIDIA Jetson TX2 on alwaysAI
by Taiga Ishida | May 06, 2020 | Computer Vision | 4 minute read
The Jetson TX2 is part of NVIDIA’s line of embedded AI modules for fast computation on the edge. The TX2 is a step up from the Nano and will give you faster inference times in your AI applications. NVIDIA bills the Jetson TX2 as its fastest, most power-efficient embedded AI computing device: a 7.5-watt supercomputer on a module that brings true AI computing to the edge.
Please note: This setup guide requires a Linux computer. Running it from a virtual machine is unverified.
Monitor
Keyboard
Mouse
Linux Computer (Ubuntu Linux x64 Version 18.04 or 16.04)
NVIDIA SDK Manager installed on your Linux Computer
Configure the Jetson TX2.
Use NVIDIA SDK Manager to Flash the Device.
Take the USB Micro-B to USB A cable included in the developer kit and connect your Jetson TX2 to the Linux Computer.
Connect a Monitor, Keyboard and Mouse.
Take the AC adapter included in the developer kit and connect your TX2 to an outlet.
Put the TX2 into Force Recovery Mode, steps listed below.
Starting with the device powered off:
Press and hold down the Force Recovery button.
Press and hold down the Power button.
Release the Power button, then release the Force Recovery button.
1. Open NVIDIA SDK Manager.
2. Log in using your credentials from developer.nvidia.com.
3. Step 01. Configure your settings to match the picture below.
4. Step 02. Accept the license and continue to Step 03.
5. Step 03. Enter your password and wait for the components to finish downloading.
6. When a pop-up opens, choose manual setup and press Flash.
7. Once the flashing is complete, keep an eye on the monitor connected to the TX2. A prompt will open for the initial setup.
8. After you are done with the initial setup on the TX2, come back to SDK Manager and fill in the credentials to install the SDK components.
9. After the process is complete, the TX2 is set up to run alwaysAI applications.
Update the docker group so you can use Docker as a non-root user on the TX2.
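One way to do this is Docker's standard post-install procedure. A minimal sketch, run in a terminal on the TX2 (the commands below are the generic Docker steps, not alwaysAI-specific):

```shell
# Add the current user to the docker group so docker commands work without sudo
sudo usermod -aG docker $USER

# Log out and back in (or reboot) for the group change to take effect,
# then verify that Docker runs without sudo:
docker run hello-world
```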
Using the alwaysAI CLI we can download the starter apps to get an app running quickly on the TX2.
We offer two starter apps specifically for NVIDIA. For this guide we will be using nvidia_autonomous_vehicle_semantic_segmentation.
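A sketch of the download step, assuming the alwaysAI CLI is installed and you are logged in (the exact output directory name may differ depending on your CLI version):

```shell
# Download the alwaysAI starter apps to the current directory
aai get-starter-apps

# Move into the NVIDIA semantic segmentation starter app
cd alwaysai-starter-apps/nvidia_autonomous_vehicle_semantic_segmentation
```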
We need to change the Dockerfile to reference the runtime container for TX2.
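The change is a one-line edit to the FROM instruction at the top of the Dockerfile. The tag below is illustrative only; check the alwaysAI documentation for the runtime tag that matches your edgeIQ version:

```dockerfile
# Use the Jetson (aarch64) runtime image instead of the default x86 base image.
# Replace the tag with the one matching your edgeIQ release.
FROM alwaysai/edgeiq:jetson-latest
```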
Now we can run the application.
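A sketch of the deploy-and-run step with the alwaysAI CLI, assuming the TX2 is the configured target device (subcommand behavior may vary slightly between CLI versions):

```shell
# From inside the app directory:
aai app configure   # select the TX2 as the target device
aai app install     # build the image and install dependencies on the device
aai app start       # run the app; the streamer serves on port 5000
```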
Click the link printed in the terminal, or enter http://localhost:5000 in a web browser, to view the application.
Now you are set up for super fast inferencing on the edge with alwaysAI and the NVIDIA Jetson TX2!