Hacky Hour 24: Open Office Hour 1/28/21

Hacky Hour: Open Office Hour

This week at Hacky Hour, alwaysAI’s CTO, Steve Griset, and Developer Advocate and Software Engineer, Lila Mullany, hosted another round of Open Office Hour. This Hacky Hour format is perfect for both beginners and experienced alwaysAI developers. We are here to ensure the success of our community. In these sessions, our community can ask us questions about setting up an edge device or building a CV application, or raise general inquiries related to computer vision and alwaysAI!

Starter Apps

During the Open Office Hour, Steve demonstrated how to get started with alwaysAI's starter apps. Starter apps are an easy way to jumpstart a computer vision application, and they demonstrate how to use alwaysAI's APIs, accelerators, and peripherals. alwaysAI developers can create starter applications from the web or directly from the Command Line Interface (CLI). alwaysAI supports the following starter apps (a minimal code sketch of the realtime object detection pattern follows the list):

Classification Starter Apps
  • Image Classifier - demonstrates how to use the classifier APIs with a set of images

  • Age and Gender Classifier - demonstrates using two classifier machine learning models


Object Detection Starter Apps
  • Hello World - demonstrates how to use object detection APIs with an image set

  • Object Detector - demonstrates how to use object detection APIs with an image set

  • Realtime Object Detector - demonstrates how to use object detection APIs with a webcam

  • Realtime Facial Detector - demonstrates how to use object detection APIs to do facial detection with a webcam

  • NVIDIA Realtime Object Detector - demonstrates how to use object detection APIs using the CUDA accelerator with a webcam (Jetson Product Line)

  • Realsense Object Detector - demonstrates how to use object detection & RealSense APIs with the RealSense camera (returns distance to objects)

  • Simple Object Counter - demonstrates how to use object detection & filtering APIs to do specific object counting with a webcam

Semantic Segmentation Starter Apps
  • Semantic Segmentation Cityscape - demonstrates how to use the semantic segmentation APIs with a set of images

  • Semantic Segmentation VOC - demonstrates how to use the semantic segmentation APIs with a set of images

  • NVIDIA Autonomous Vehicle Semantic Segmentation - demonstrates how to use the semantic segmentation APIs and CUDA accelerator with a video stream (Driving in Toronto)

Tracker and Object Detector Starter Apps
  • Face Counter - demonstrates how to use the centroid tracker APIs with an object detector and a webcam

  • Detector Tracker - demonstrates how to use the correlation tracker APIs with an object detector and a webcam

Pose Estimation Starter Apps
  • Realtime Pose Estimator - demonstrates how to use the human skeleton pose estimator APIs with a webcam
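
To make that pattern concrete, here is a minimal sketch of what a realtime object detection starter app looks like. The edgeiq API calls and the model ID are assumptions based on the published starter apps, not an exact copy of any one of them:

```python
# Minimal realtime object detection loop (edgeiq API usage assumed from the starter apps)
import edgeiq


def main():
    # Load an object detection model; the model ID here is illustrative
    detector = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")
    detector.load(engine=edgeiq.Engine.DNN)

    with edgeiq.WebcamVideoStream(cam=0) as camera, edgeiq.Streamer() as streamer:
        while True:
            frame = camera.read()
            results = detector.detect_objects(frame, confidence_level=0.5)
            # Draw boxes and labels on the frame
            frame = edgeiq.markup_image(frame, results.predictions)
            # Push the annotated frame and labels to the browser-based streamer
            streamer.send_data(frame, [p.label for p in results.predictions])
            if streamer.check_exit():
                break


if __name__ == "__main__":
    main()
```

Swapping the webcam stream for a file stream, a different model, or an accelerator-specific engine is what distinguishes most of the starter apps above from one another.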

 

Guest Questions


QUESTION: How do you get the CLI working on the Jetson Nano? Does the CLI only work on Windows?

ANSWER (Andres): The CLI works cross-platform, so you can install it on the Nano using the Linux setup guide. You can also deploy to your Nano, meaning you can do your work on Windows and have it run on the Nano.


QUESTION: How can we display multiple streams from different cameras in the same GUI web interface?

ANSWER (Lila): You can create multiple input streams (like one video streamer and one webcam, or multiple webcams) and then read in the frames for each, e.g. image1 = file_reader.read(), image2 = cam_reader.read(). You can then perform inference on the frames, mark them up with predictions if you want, concatenate the frames (there is an example in this blog: https://alwaysai.co/blog/using-multiple-object-detection-models), and send the combined output to the output streamer.
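
Here is a rough sketch of that flow, assuming the edgeiq stream and Streamer APIs from the starter apps; the file path and resizing step are illustrative, and the inference/markup step is elided:

```python
# Sketch: combine two input streams into one frame for the Streamer
import cv2
import numpy as np
import edgeiq

with edgeiq.FileVideoStream("input.mp4") as file_reader, \
        edgeiq.WebcamVideoStream(cam=0) as cam_reader, \
        edgeiq.Streamer() as streamer:
    while True:
        image1 = file_reader.read()
        image2 = cam_reader.read()
        # ... run inference and markup on image1/image2 here ...
        # Resize the second frame to the first frame's height, then place them side by side
        h = image1.shape[0]
        scale = h / image2.shape[0]
        image2 = cv2.resize(image2, (int(image2.shape[1] * scale), h))
        combined = np.hstack((image1, image2))
        streamer.send_data(combined, "two inputs, one view")
        if streamer.check_exit():
            break
```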


QUESTION: How can I access my edge device and its GUI web interface (localhost) if my edge devices are deployed in different cities?

ANSWER (Steve): Common ways to do this are RTSP, a Flask server, or ZeroMQ (0MQ). Join us next week to learn How to Build an Interactive Web Application with alwaysAI.
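
As a teaser, below is a minimal sketch of the Flask option: a plain Flask + OpenCV MJPEG endpoint that a remote browser can hit. The route name, port, and camera source are placeholders, and reaching a device in another city would still require something like a VPN, tunnel, or reverse proxy in front of it:

```python
# Minimal MJPEG streaming endpoint with Flask and OpenCV (illustrative, not the alwaysAI API)
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # replace with your device's camera or processed frames


def mjpeg_frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # multipart/x-mixed-replace lets the browser keep replacing the image in place
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")


@app.route("/video_feed")
def video_feed():
    return Response(mjpeg_frames(), mimetype="multipart/x-mixed-replace; boundary=frame")


if __name__ == "__main__":
    # 0.0.0.0 makes the feed reachable from other machines on the local network
    app.run(host="0.0.0.0", port=5000)
```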



QUESTION: Is it possible to transfer data securely via SSH?

ANSWER (Lila): Yes, the Image Capture Dashboard allows you to capture data from the edge device to your local device. You can also work this into a GUI that uses the Raspberry Pi's IP to copy the files automatically. You can learn more about the Image Capture Dashboard here.


QUESTION: If you're running a Pi or Windows, will you need to use Docker?

ANSWER (Steve): If you're deploying the application onto an edge device, you'll need to use Docker. However, developers will not need Docker to deploy locally on Mac or Windows devices. Note: Docker comes preinstalled on Jetson devices.


QUESTION: How would you go about developing a solution to solve a Rubik's cube with alwaysAI? 

ANSWER (AB): This needs to be solved in a controlled environment, considering the lighting conditions, etc. Developers will need to convert RGB values to HSV values and then follow a solving algorithm. This will be discussed over time on alwaysAI's Discord channel.
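
As a rough illustration of the RGB-to-HSV step AB mentions, here is a small OpenCV sketch that converts a frame to HSV and masks one sticker color; the image path and HSV bounds are placeholders you would tune for your own lighting:

```python
# Convert a cube-face image to HSV and mask one sticker color
import cv2
import numpy as np

frame = cv2.imread("cube_face.jpg")  # placeholder image path
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # OpenCV loads images as BGR

# Example bounds for a red-ish sticker; tune these per color and environment
lower = np.array([0, 120, 70])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Count how many pixels matched; per-sticker counts can feed the solving algorithm
print("matched pixels:", int(cv2.countNonZero(mask)))
```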


QUESTION: Does alwaysAI support the Intel accelerator? 

ANSWER (Steve): We support the NCS2 and the Myriad processor. In the future, we also plan to add support for the OAK-D.
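
For reference, loading a model onto an NCS2 typically looks something like the sketch below; the OpenVINO engine and Myriad accelerator enum names are assumed from the edgeiq library, and the model ID is illustrative:

```python
# Sketch: targeting an Intel NCS2 / Myriad device when loading a model
import edgeiq

detector = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")  # illustrative model ID
detector.load(
    engine=edgeiq.Engine.DNN_OPENVINO,        # assumed OpenVINO engine enum
    accelerator=edgeiq.Accelerator.MYRIAD,    # assumed Myriad accelerator enum
)
```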


QUESTION: Can a single edge device such as a Jetson Nano or the Raspberry Pi support multiple cameras?

ANSWER (Steve): The recommendation is to use one camera per edge device; however, we have seen up to two cameras supported by a single edge device.


QUESTION: What edge device do you recommend alwaysAI users start out with?

ANSWER (Steve): My recommendation is to start out with the NVIDIA Jetson Nano. For production deployments, I recommend the NVIDIA Jetson Xavier NX.

See below for the full video of the Hacky Hour, or click here.

Open Office Hour

Join us every Thursday at 10:30 AM PST for weekly Hacky Hour! Whether you are new to the community or an experienced user of alwaysAI, you are welcome to join, ask questions, and provide the community with information about what you're working on. Register here.

Get started now

We are providing professional developers with a simple and easy-to-use platform to build and deploy computer vision applications on edge devices. 

Sign Up for Free