Getting Started with the Jetson Nano using alwaysAI

The Jetson Nano is a powerful, compactly packaged AI accelerator that lets you run intensive models (such as those typically used for semantic segmentation and pose estimation) with shorter inference times while still meeting key performance requirements. The Jetson Nano also speeds up lighter models, like those used for object detection, to the tune of 10-25 fps.

NVIDIA Jetson Nano Object Detector

The Jetson Nano Developer Kit

While NVIDIA sells the Jetson Nano Developer Kit directly, it doesn't come with the power supply or SD card needed to start using it. It also doesn’t come with the standard keyboard, mouse, and monitor typically needed to boot up edge devices for the first time. In this guide we’ll walk you through setting up your Jetson Nano Developer Kit.

Things you will need:

- the Jetson Nano Developer Kit
- a microSD card
- a 4 A barrel-jack power supply
- a USB keyboard, a mouse, and a monitor
- a USB WiFi adapter (or an ethernet cable)

Flashing the SD Card

You can find a link to the JetPack image that has the drivers alwaysAI is expecting here. This image can then be flashed onto the SD card using balenaEtcher. Run Etcher, select the image downloaded from the link above, and insert your SD card. Then hit the flash button to start the process.


Once the image is done flashing, pop the microSD card into the slot found on the back underside of the removable Nano module.


Enabling the Power Supply

Although the Jetson Nano allows for USB power, I recommend getting a 4 amp rated barrel-jack power supply to get the best performance from the board. Before using the barrel-jack power supply, you first need to enable it by placing a jumper on the pins labeled J48 (located directly in front of the barrel-jack port). Next, plug in your power supply and you should see a green LED light indicating that the Nano is powered on.


Prerequisites for alwaysAI

Since I already have an account and the alwaysAI CLI installed on my laptop, I’ll skip to the other two prerequisites, network connectivity and Docker. Just follow the links above to register for an account and download the alwaysAI CLI.

Network Connectivity

The Jetson Nano Developer Kit doesn’t include a WiFi module, so you have two options. You can either connect your Jetson Nano directly to your laptop using an ethernet cable and then set up a static IP and share your network, or you can add a USB WiFi adapter and connect the Nano to the same WiFi network that your laptop is using. Here we’ll be using a USB WiFi adapter.
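If you go the ethernet route instead, one way to handle the laptop side is NetworkManager's connection sharing, which hands the Nano an address over the wire. A configuration sketch using nmcli (the interface name eth0 and the connection name nano-share are assumptions; check `nmcli device` for your machine):

```shell
# On the laptop: share its existing connection over the wired interface.
# "eth0" and "nano-share" are placeholders -- adjust for your setup.
sudo nmcli connection add type ethernet ifname eth0 \
    con-name nano-share ipv4.method shared
sudo nmcli connection up nano-share
```

With `ipv4.method shared`, NetworkManager NATs your laptop's connection and serves the Nano an address over the cable, so no manual static IP is needed on the Nano side.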

Connect through WiFi

Plug a keyboard, mouse, monitor, and WiFi adapter into the Jetson Nano and power it on. It will eventually load the Ubuntu setup wizard. During the setup steps, remember to keep track of the computer name you set (I named mine “nano01”), since you will need it in the app deploy step.

The Jetson Nano should automatically recognize the WiFi adapter and show the standard WiFi icon in the top right corner of the menu bar. Click that icon, select the network your laptop is connected to, and enter the network credentials; the Jetson Nano should then have internet access, and your laptop should be able to reach the board. You can test this from your laptop by pinging the board: open up a terminal and enter the following:

ping nano01

If you get a response without a timeout error, this indicates that your laptop can connect to the Jetson Nano.
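If you’d rather script this check, here’s a small Python sketch (the hostname nano01 is the one from the setup step; -W is Linux ping’s per-reply timeout) that first resolves the name and then sends a single echo request:

```python
import socket
import subprocess

def reachable(host: str, timeout_s: int = 2) -> bool:
    """Return True if `host` resolves and answers a single ping."""
    try:
        socket.gethostbyname(host)  # fails fast if the name doesn't resolve
    except socket.gaierror:
        return False
    # -c 1: send one echo request; -W: per-reply timeout (Linux ping)
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

print(reachable("nano01"))  # should print True once the Nano is on your network
```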

Docker

An important prerequisite for using alwaysAI is having Docker installed on your edge device. Although the image provided by NVIDIA already has Docker installed, it doesn't allow you to run it without sudo. To fix this, let's add your user to the Docker user group:

sudo usermod -aG docker $USER

Log out and then log back in, or reboot the Nano to let the changes take effect, and you should be able to use Docker without needing sudo.
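To confirm the group change took effect after logging back in, you can run a docker command without sudo, or check membership directly. A quick Python sketch using the standard grp module (note that gr_mem only lists users for whom docker is a supplementary group, which is what usermod -aG sets up):

```python
import getpass
import grp

def in_docker_group(user: str) -> bool:
    """True if `user` is listed as a member of the local 'docker' group."""
    try:
        members = grp.getgrnam("docker").gr_mem
    except KeyError:  # the docker group doesn't exist on this machine
        return False
    return user in members

print(in_docker_group(getpass.getuser()))  # should print True after re-login
```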

Now you should have all the software prerequisites to run alwaysAI in a remote development configuration.

Running Applications on the Nano

alwaysAI provides starter apps, which are convenient starting points for building your own computer vision applications. Once you have the alwaysAI CLI installed, download and list the latest starter apps using:

aai get-starter-apps
ls

You should see two starter apps called nvidia_autonomous_vehicle_semantic_segmentation and nvidia_realtime_object_detector, which are starter apps formatted to run on the Jetson Nano. (The major difference between these starter apps and the others is the use of a “Nano” base image in the Dockerfile.) Here are the contents of the Dockerfile in each NVIDIA app:

FROM alwaysai/edgeiq:nano-0.11.0

Additionally, these apps use the DNN_CUDA backend:

semantic_segmentation.load(engine=edgeiq.Engine.DNN_CUDA)
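The same app can fall back to the plain OpenCV DNN engine on devices without CUDA. A minimal sketch of that selection logic, using a stand-in enum rather than the real edgeiq.Engine (the DNN and DNN_CUDA names mirror edgeIQ's):

```python
from enum import Enum

class Engine(Enum):
    """Stand-in for edgeiq.Engine; only the two backends discussed here."""
    DNN = "dnn"            # OpenCV DNN backend (CPU)
    DNN_CUDA = "dnn_cuda"  # CUDA-accelerated backend (Jetson Nano)

def pick_engine(cuda_available: bool) -> Engine:
    """Prefer the GPU backend when CUDA is present, else fall back to CPU."""
    return Engine.DNN_CUDA if cuda_available else Engine.DNN

# On the Nano you'd pass the result to semantic_segmentation.load(engine=...)
print(pick_engine(True).name)   # DNN_CUDA
print(pick_engine(False).name)  # DNN
```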

NVIDIA Autonomous Vehicle Semantic Segmentation

This app loads dash cam footage from a car driving through Toronto and feeds it to a CV model trained on the Cityscapes dataset. The model classifies each pixel in each frame as belonging to a category found in cities, such as car, road, or building. If you run the starter app, you’ll see the Nano perform semantic segmentation inference on the video as if it were an actual live camera feed.

# Run aai app deploy to move your app onto your Nano
aai app deploy
✔ Target configuration not found. Do you want to create it now? … yes
✔ What is the destination? › Remote device
✔ Found Dockerfile
✔ Please enter the hostname (with optional user name) to connect to your device via ssh (e.g. "pi@1.2.3.4"): … alwaysai@nano01
✔ Connect by SSH
✔ Check docker executable
✔ Check docker permissions
✔ Would you like to use the default installation directory "alwaysai/nvidia_autonomous_vehicle_semantic_segmentation"? … yes
✔ Create target directory
✔ Write alwaysai.target.json
✔ Copy application to target
✔ Build docker image
✔ Install model alwaysai/enet
✔ Found python virtual environment



# Run aai app start to run the starter app
aai app start
Loaded model:
alwaysai/enet

Engine: Engine.DNN_CUDA
Accelerator: Accelerator.NVIDIA

Labels:
['Unlabeled', 'Road', 'Sidewalk', 'Building', 'Wall', 'Fence', 'Pole', 'TrafficLight', 'TrafficSign', 'Vegetation', 'Terrain', 'Sky', 'Person', 'Rider', 'Car', 'Truck', 'Bus', 'Train', 'Motorcycle', 'Bicycle']

[INFO] Streamer started at http://localhost:5000

Click the link (or copy and paste it into your browser) from your terminal to open up the alwaysAI Streamer, which is a tool that can be used to output image data for visual debugging. If you’re curious about it, check out the app.py file to see how you can load any image and text into the Streamer to help you during development.


Using the Streamer, you can see that the app is classifying pixels in the image as road, person, sidewalk, and building. By combining the spatial and location information semantic segmentation returns with decision-making logic, you can create applications that navigate autonomously.
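As a toy illustration of that kind of decision-making (the helper and the toy data are illustrative, not part of the starter app): given the class map the model returns, with one label index per pixel and 'Road' at index 1 in the label list printed earlier, you can estimate how much of the lower half of the frame is drivable.

```python
import numpy as np

ROAD = 1  # index of "Road" in the label list the starter app prints

def road_fraction(class_map: np.ndarray) -> float:
    """Fraction of pixels in the lower half of the frame labeled as road."""
    lower_half = class_map[class_map.shape[0] // 2 :, :]
    return float((lower_half == ROAD).mean())

# Toy 4x4 class map: sky and building on top, mostly road below.
toy = np.array([
    [11, 11, 11, 11],  # Sky
    [3,  3,  3,  3],   # Building
    [1,  1,  1,  2],   # Road, Road, Road, Sidewalk
    [1,  1,  1,  1],   # Road
])
print(road_fraction(toy))  # 0.875 -> plenty of road ahead
```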

NVIDIA Real-time Object Detector

This starter app uses the webcam attached to your Jetson Nano to perform object detection, which typically takes 431 ms per inference on our Inforce GPU but runs at 72 ms per inference on the Nano. That’s roughly 14 fps! Using the power of the Jetson Nano, we can run highly accurate object detection at rates that allow decisions to be made quickly, which can be critical when those decisions depend on fast-moving objects.
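Those fps figures follow directly from the per-inference latency; a one-line conversion for checking numbers like these:

```python
def ms_to_fps(ms_per_inference: float) -> float:
    """Convert per-frame inference time in milliseconds to frames per second."""
    return 1000.0 / ms_per_inference

print(round(ms_to_fps(431), 1))  # 2.3 fps without acceleration
print(round(ms_to_fps(72), 1))   # 13.9 fps on the Jetson Nano
```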


Conclusion

The Jetson Nano may just be the platform of choice when you consider its small form factor, price point, and ability to accelerate model inference. It speeds up many models, and is most suitable for high-performance, mountable systems with a stable power supply. Although it takes a few pieces of standard hardware and a bit of setup to get started, overall it's one of the best accelerated SBC options on the market.

 

Get started now

We provide professional developers with a simple, easy-to-use platform to build and deploy computer vision applications on edge devices.

Get started