How We Built a Conference Booth Tracking App

alwaysAI offers a number of starter apps that make it easy to quickly deploy computer vision (CV) based applications. In this demo, I'm going to show you how to extend one of these starter apps and, hopefully, provide some insight into how you can create your own custom CV apps. The app we're going to end up with is meant to be used at a conference booth, to track the number of attendees who stop by and provide some basic metrics on how much time they spend at the booth.

alwaysAI Demo 5 - Conference Booth Analytics App

To follow along or to implement your own copy of this app, you will need a compatible device with an attached camera. For more information and a quick overview of the alwaysAI starter apps, see "How to Create and Run a Real-Time Object Detector Starter App in Minutes."

 

Starting with the Real-Time Object Detector Starter App

Two of our very talented developers, Eric VanBuhler and Andres Ulloa, created one of my favorite starter apps, the realtime_object_detector. This is a great starter app because it's concise (under 70 lines of code, half of which are comments and print statements for clarity), runs decently even on just the CPU of a Raspberry Pi 3 B+, and quickly demonstrates what you can build on once you spin it up for the first time. This app comes pre-configured to use the mobilenet_ssd model and, like all our starter apps, is capable of displaying labels and bounding boxes for each object detected through our included streamer.
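To give a sense of how little code that is, here is a condensed sketch of the starter app's core loop. It isn't a copy of the actual app: the exact calls can vary between edgeIQ releases, and the confidence threshold shown is just an illustrative value.

import edgeiq

def main():
    # load the pre-configured detection model
    obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")
    obj_detect.load(engine=edgeiq.Engine.DNN)

    # open the attached camera and the built-in web streamer
    with edgeiq.WebcamVideoStream(cam=0) as video_stream, \
            edgeiq.Streamer() as streamer:
        while True:
            frame = video_stream.read()
            results = obj_detect.detect_objects(frame, confidence_level=.5)

            # draw labels and bounding boxes on the frame
            frame = edgeiq.markup_image(
                frame, results.predictions, colors=obj_detect.colors)

            # show the frame plus the detected labels on the streamer page
            text = [prediction.label for prediction in results.predictions]
            streamer.send_data(frame, text)

            if streamer.check_exit():
                break

if __name__ == "__main__":
    main()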

Now, the mobilenet_ssd model is a relatively small model capable of picking out 20 object classes, including cats, dogs, people, and potted plants. It makes for a great demo for developers trying out alwaysAI for the first time at home or in an office but, as we later discovered at a conference, is less interesting in an environment with lots of attendees who, sadly, aren't carrying around pets or plants.

Though running this demo at our booth still captured the interest of a lot of people, the number one question we got throughout the day was: "What can I build with your platform?", a question that ideally would have been answered by our demo.

After the first day of this two-day conference, we quickly realized our demo could use an upgrade, so that night I went back to my hotel room and started coding away.

 

How to Customize the Starter Applications

Now, we have an existing face_counter starter application that uses the face detection model mobilenet_ssd_face and a centroid tracker to detect faces and keep track of how many it has seen.

Booth Demo - 2

 

As you can see here, the code for this app is pretty trim as well, thanks to alwaysAI's edgeIQ Python library, which abstracts away a lot of the complexity required to run a CV application.

 

Screenshot: the face_counter starter app's code

In the screenshot above, you can see on line 50 where an object_id is assigned to each face detected.

This number increments for every 'new' face it sees. We're simply taking this id and displaying it to the streamer. Now as a point of clarification, this model can identify a human face but can't distinguish one from another. So if a detected person walks out of a camera's frame for a few seconds then re-enters, they'll be counted as a new face. 
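Pieced together, the part of face_counter that assigns and displays those ids looks roughly like this. This is a sketch rather than the app's exact source: the model id and the CentroidTracker arguments are illustrative, and the tracker's API may differ slightly between edgeIQ versions.

import edgeiq

face_detector = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd_face")
face_detector.load(engine=edgeiq.Engine.DNN)

# the tracker matches detections across frames; the deregister_frames and
# max_distance values here are illustrative
tracker = edgeiq.CentroidTracker(deregister_frames=20, max_distance=50)

with edgeiq.WebcamVideoStream(cam=0) as video_stream, \
        edgeiq.Streamer() as streamer:
    while True:
        frame = video_stream.read()
        results = face_detector.detect_objects(frame, confidence_level=.5)

        # update() matches this frame's detections to existing tracks and
        # returns a dict of object_id -> prediction; every 'new' face is
        # assigned the next incrementing object_id
        tracked_faces = tracker.update(results.predictions)

        # display the ids on the streamer, as the starter app does
        text = ["Face {}".format(object_id) for object_id in tracked_faces]
        streamer.send_data(frame, text)

        if streamer.check_exit():
            break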

But that's okay — the app we want to build just needs to track how much time a person spent in front of our booth per visit, then aggregate that time so we can display the following metrics: total number of visits, total time, average time, and longest time any single individual was at the booth.

 

Creating a More Robust Tracking App

To do this, the first thing we did was swap out the mobilenet_ssd_face model for the mobilenet_ssd model, because the face detection model works best if someone is looking straight and level at the camera. In a fast-moving conference setting, people could be looking anywhere. The mobilenet_ssd model detects people by their entire shape and will count someone even if their face or body is partially occluded by another person or by particularly fancy headwear.
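In code, the swap is a one-line change to which model ObjectDetection loads (the "alwaysai/" prefix shown here is an assumption about how the models are referenced in the catalog):

# before: face detection only
# obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd_face")

# after: whole-person (plus 19 other classes) detection
obj_detect = edgeiq.ObjectDetection("alwaysai/mobilenet_ssd")
obj_detect.load(engine=edgeiq.Engine.DNN)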

Booth Demo - 3

Booth Demo - 4

 

The next thing we did was use the edgeIQ filter_by_label function to ignore anything other than objects labeled 'person,' in case someone actually did bring a cat to the sponsor hall.
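The filtering step slots into the detection loop from the earlier sketch and looks something like the following. In recent edgeIQ releases this helper is exposed as edgeiq.filter_predictions_by_label; treat the exact name and signature as an assumption for your version of the library.

results = obj_detect.detect_objects(frame, confidence_level=.5)

# keep only detections labeled 'person' and ignore everything else
people = edgeiq.filter_predictions_by_label(results.predictions, ['person'])

# track and mark up only the people
tracked_people = tracker.update(people)
frame = edgeiq.markup_image(frame, people, colors=obj_detect.colors)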

 

Booth Demo - 6

Booth Demo - 7

Booth Demo - 8

 

Next, I created a metrics_manager.py file that maintains a dictionary of all visitors, keyed by object_id, with the amount of time each one was detected at the booth as the value.
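A minimal sketch of that bookkeeping is below; aside from the currentMetrics() function mentioned later, the function and variable names (and the timing scheme) are illustrative rather than the app's actual ones.

# metrics_manager.py (sketch)
import time

# object_id -> cumulative seconds the visitor has been detected at the booth
visit_times = {}
# object_id -> timestamp of the last frame in which the visitor was seen
_last_seen = {}

def update(object_ids):
    """Accumulate detection time for every object_id present in this frame."""
    now = time.time()
    for object_id in object_ids:
        elapsed = now - _last_seen.get(object_id, now)
        visit_times[object_id] = visit_times.get(object_id, 0.0) + elapsed
        _last_seen[object_id] = now

The main loop would then call something like metrics_manager.update(tracked_people.keys()) once per frame.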

 

Booth Demo - 9

 

This simple schema doesn't allow us to retrace exactly when each visit occurred, but it works for our use case. The main app then calls this file's currentMetrics() function, which calculates the total, average, and maximum time data from this dictionary.
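Continuing the sketch above, currentMetrics() can be as simple as this (the returned keys are illustrative):

def currentMetrics():
    """Summarize the visit_times dictionary for display."""
    times = list(visit_times.values())
    if not times:
        return {"visits": 0, "total": 0.0, "average": 0.0, "longest": 0.0}
    return {
        "visits": len(times),               # total number of visits
        "total": sum(times),                # total time across all visits
        "average": sum(times) / len(times), # average time per visit
        "longest": max(times),              # longest single visit
    }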

 

Booth Demo - 10

Booth Demo - 11

Booth Demo - 12

Booth Demo - 13

 

This data is then passed to our Streamer webpage for display.
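Tying it together, the main loop can format those metrics as text and hand them to the Streamer along with the marked-up frame. Again, this is a sketch built on the illustrative metrics_manager names above.

import metrics_manager

metrics = metrics_manager.currentMetrics()
text = [
    "Visits: {}".format(metrics["visits"]),
    "Total time: {:.0f} s".format(metrics["total"]),
    "Average time: {:.0f} s".format(metrics["average"]),
    "Longest visit: {:.0f} s".format(metrics["longest"]),
]
streamer.send_data(frame, text)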

 

Booth Demo - 14

 

And that's it! Just deploy and start the application on an edge device with a webcam using our CLI tool, and you have your own real-time foot traffic tracking app.

 

Booth Demo - 5

 

Note that in the above screenshot there are several 'ghost' bounding boxes. These are residual tracks from people who have just exited the frame; the tracker holds onto them for a moment in case they reappear.

To get the code for this project, go to:

https://github.com/alwaysai/people-counter

To download the necessary models and get access to our starter apps, you'll need an active alwaysAI account. Sign up below if you haven't already.

We are providing professional developers with a simple and easy-to-use platform to build and deploy computer vision applications on embedded devices. Sign up for the alwaysAI Private Beta program now!
