How To Get Started with the NVIDIA Jetson TX2 on alwaysAI

The Jetson TX2 is part of NVIDIA’s line of embedded AI modules, enabling very fast computation at the edge. The TX2 is a step up from the Nano and will give you faster inference times in your AI applications. NVIDIA bills it as the fastest, most power-efficient embedded AI computing device: a 7.5-watt supercomputer on a module that brings true AI computing to the edge.

Please note: this setup guide requires a Linux computer; running it inside a virtual machine is unverified.

What you will need:

 

How To Flash The Device

  1. Configure the TX2.

  2. Use NVIDIA SDK Manager to Flash the Device.

 

Configuring the TX2

  • Take the USB Micro-B to USB A cable included in the developer kit and connect your TX2 to the Linux computer.

  • Connect a monitor, keyboard, and mouse.

  • Take the AC adapter included in the developer kit and connect your TX2 to an outlet.

  • Put the TX2 into Force Recovery Mode, steps listed below.

 

TX2 Force Recovery Mode

Starting with the device powered off:

  1. Press and hold down the Force Recovery button.

  2. Press and hold down the Power button.

  3. Release the Power button, then release the Force Recovery button.

 

Flashing the Device

  1. Open NVIDIA SDK Manager.

  2. Log in with your credentials from developer.nvidia.com.

  3. Step 01: Configure your settings to match the picture below.

[Screenshot: SDK Manager Step 01 configuration for the Jetson TX2]

  4. Step 02: Accept the license agreement and continue to Step 03.


  5. Step 03: Enter your password and wait for the components to finish downloading.


  6. When the pop-up opens, choose manual setup and press Flash.


  7. Once the flashing is complete, keep an eye on the monitor connected to the TX2; a prompt will open for the initial setup.

  8. After you are done with the initial setup on the TX2, come back to SDK Manager and fill in the credentials to install the SDK components.


  9. After the process is complete, the TX2 is set up to run alwaysAI applications.

 

Running alwaysAI Applications on TX2

Prerequisites:

[Screenshot: prerequisites for running alwaysAI applications]

Using the alwaysAI CLI, we can download the starter apps to get an app running quickly on the TX2.

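Assuming the alwaysAI CLI (`aai`) is already installed and you are logged in on your development machine, the download looks like this:

```shell
# Download the alwaysAI starter apps into the current directory
# (assumes the alwaysAI CLI, `aai`, is installed and you are logged in)
aai get-starter-apps
```

This should create a folder (typically alwaysai-starter-apps) with each starter app in its own subdirectory.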

We offer two starter apps specifically for NVIDIA devices. For this guide we will be using nvidia_autonomous_vehicle_semantic_segmentation.

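Configuring the app for the TX2 might look like the following; the folder name is the one typically created by `aai get-starter-apps`, so adjust it to match your setup:

```shell
# Move into the semantic segmentation starter app
cd alwaysai-starter-apps/nvidia_autonomous_vehicle_semantic_segmentation

# Point the app at the TX2; the CLI prompts for the target device
aai app configure
```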

We need to change the Dockerfile to reference the runtime container for the TX2.

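The change itself is a one-line edit to the FROM instruction so the app uses alwaysAI's Jetson (ARM64) runtime image instead of the default one. The exact image tag depends on your edgeIQ release, so the tag below is only a placeholder:

```dockerfile
# Base the app on the Jetson runtime container; replace <version> with
# the edgeIQ release tag that matches your installed CLI
FROM alwaysai/edgeiq:jetson-<version>
```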

Now we can run the application.

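Deploying and starting the app from the project directory might look like this (on older versions of the CLI the install step was `aai app deploy`):

```shell
# Install the app and its dependencies on the configured TX2 target
aai app install

# Start the application; the CLI prints a link to the streamer output
aai app start
```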

Click the link, or enter http://localhost:5000 in a web browser, to view the application.


Now you are set up for super-fast inference on the edge with alwaysAI and the NVIDIA Jetson TX2!

 

Get started now

We provide developers with a simple and easy-to-use platform to build and deploy computer vision applications on edge devices.

Sign Up for Free