Using alwaysAI's Model Training Tool to Build a License Plate Tracker

In this tutorial, we’ll cover how to create your own license plate tracker using the new license plate detection model, which was created using alwaysAI’s model training tool.

If you want to read more about how the license plate detection model was built, read this blog! To read more about model training in general, you can visit our model training overview article.

If you’re interested in training your own object detection model, you can fill out this survey to sign up for our model training beta program. You can also find a subset of the dataset used in the creation of the ‘alwaysai/vehicle_license_mobilenet_ssd’ model here, if you would like to train your own license plate detector and test it out using the example app we'll build in this tutorial!

To complete the tutorial, you must have:

  1. An alwaysAI account (it’s free!)
  2. alwaysAI set up on your machine (also free)
  3. A text editor such as Sublime Text or an IDE such as PyCharm, both of which offer free versions, or whatever else you prefer to code in

Please see the alwaysAI blog for more background on computer vision, developing models, how to change models, and more. The finished code from this tutorial is available on GitHub.

Let’s get started!

We’ll break this tutorial down into two parts:

  1. Set up
  2. App.py

Set Up

In this tutorial, we’ll build the app from scratch. After you've signed up and logged in, go to https://alwaysai.co and navigate to your Dashboard. You can follow the steps outlined here to create a new project. For this app, you’ll want to choose ‘create a project from scratch’. Once your project is created on the Dashboard, scroll down to the ‘Models’ section and click the ‘+’ sign to add a model. Browse the model catalog to find “alwaysai/vehicle_license_mobilenet_ssd”, which is in the ‘object detection’ category. Add this model to your project by clicking on it, selecting ‘add to project’, and choosing your project from the drop-down menu. Navigate back to your project Dashboard to get the configuration hash code, and then finish configuring your project locally as described in the documentation page.

Finally, create a folder inside the directory that app.py is in named ‘video’. This is where we’ll store the input videos. You can find some sample videos in the GitHub repository, along with the completed code for this tutorial.

App.py

Now, we’ll add the content to app.py. There are a few different moving parts here, but we’ll walk through the code top to bottom and explain them as we go!

First, make sure the following lines are at the top of app.py:

import time
import edgeiq

Then, replace the contents of ‘main()’ with the following:

def main():
    # The current frame index
    frame_idx = 0

    # The number of frames to skip before running detector
    detect_period = 30

    obj_detect = edgeiq.ObjectDetection(
            "alwaysai/vehicle_license_mobilenet_ssd")
    obj_detect.load(engine=edgeiq.Engine.DNN)

    print("Loaded model:\n{}\n".format(obj_detect.model_id))
    print("Engine: {}".format(obj_detect.engine))
    print("Accelerator: {}\n".format(obj_detect.accelerator))
    print("Labels:\n{}\n".format(obj_detect.labels))

    tracker = edgeiq.CorrelationTracker(max_objects=5)

    fps = edgeiq.FPS()

What we’ve done here is pretty standard if you’ve used any of alwaysAI’s starter or example apps before: we’re just setting up the main method, creating an object detector using the alwaysai/vehicle_license_mobilenet_ssd model, and we’re printing the object detector’s configuration to the console. 

We’re also creating three important variables: frame_idx, detect_period, and tracker.

The variable tracker is the correlation tracker object. Using a tracker, such as the correlation tracker in the edgeiq library, reduces CPU usage and inference time.

The variable frame_idx tracks how many iterations we’ve done, and detect_period defines how often we’ll perform object detection. If the frame count is not evenly divisible by detect_period, we instead check whether the tracker is currently tracking any objects (using its ‘count’ attribute), and if so, we set the ‘predictions’ variable to the tracked predictions. To reduce overhead, increase detect_period; to detect more frequently, decrease it.
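To see this frame-skipping pattern in isolation, here is a minimal sketch with the detector and tracker work reduced to counters (process_frames is a hypothetical stand-in, not part of edgeiq):

```python
def process_frames(num_frames, detect_period=30):
    """Return how many frames would run the (expensive) detector
    versus the (cheap) tracker update."""
    detections, tracks = 0, 0
    for frame_idx in range(num_frames):
        if frame_idx % detect_period == 0:
            detections += 1  # full model inference on this frame
        else:
            tracks += 1      # correlation-tracker update only
    return detections, tracks
```

With 90 frames and the default detect_period of 30, the detector runs only on frames 0, 30, and 60; every other frame is handled by the tracker.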

Next, copy the following code directly under what you just added. This creates a ‘try’ block, along with a ‘finally’ counterpart, which will always be executed regardless of how the ‘try’ block exits.

    try:
        # blank for now, will fill in later!
        pass

    finally:
        fps.stop()
        streamer.close()
        print("elapsed time: {:.2f}".format(fps.get_elapsed_seconds()))
        print("approx. FPS: {:.2f}".format(fps.compute_fps()))

        print("Program Ending")
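If the try/finally behavior is unfamiliar, this small sketch (unrelated to edgeiq) shows that the ‘finally’ clause runs no matter how the ‘try’ block exits, which is why it is the right place for cleanup calls like fps.stop() and streamer.close():

```python
def run_with_cleanup():
    """Record the order in which the blocks execute."""
    events = []
    try:
        events.append("processing")
        raise RuntimeError("stream interrupted")
    except RuntimeError:
        events.append("handled")
    finally:
        # Runs whether the try block succeeds, raises, or returns early
        events.append("cleanup")
    return events
```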

Now that the configuration is done and we have a skeleton of an app to work with, we’ll fill in the object tracking and file import portions. All of the rest of the code will go into the ‘try’ block we created in the last step.

Inside the ‘try’ block, paste the following code:

        video_paths = edgeiq.list_files(
                base_path="./video/", valid_exts=".mp4")
        streamer = edgeiq.Streamer().setup()

        for video_path in video_paths:
            with edgeiq.FileVideoStream(video_path) as video_stream:

This code uses the edgeiq list_files function to get a list of all the files you store in the ‘video’ folder that have the file extension ‘.mp4’, and sets up the streamer. Then, it iterates over each of the file paths in that returned list and runs the nested code on each, which we’ll cover in the following section.
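As a rough mental model, edgeiq.list_files behaves like the following standard-library sketch (the real function’s matching rules may differ; this version is just an illustration):

```python
import os

def list_files(base_path="./video/", valid_exts=".mp4"):
    """Collect paths under base_path whose names end with valid_exts."""
    paths = []
    for root, _dirs, files in os.walk(base_path):
        for name in sorted(files):
            if name.lower().endswith(valid_exts):
                paths.append(os.path.join(root, name))
    return paths
```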

Add in as many .mp4 files to the ‘video’ folder as you would like to test your new app on!

Next, we’ll add in the tracking logic. For every file in the ‘video’ folder, we’ll use that video as an input stream and detect and track license plates and vehicles. 

NOTE: the variable called ‘predictions’ that is defined at the beginning of the ‘while’ loop in the code below is a list that will be used to store the predictions sent to the streamer. It will be updated every ‘detect_period’, otherwise it will hold the ‘tracker’ predictions.

Add the following code directly under the ‘with’ statement that opens the video stream:

                # Allow the video stream to warm up
                time.sleep(2.0)
                fps.start()

                # loop detection
                while video_stream.more():

                    frame = video_stream.read()
                    predictions = []

                    # if using new detections, update 'predictions'
                    if frame_idx % detect_period == 0:
                        results = obj_detect.detect_objects(
                                frame, confidence_level=.5)

                        # Generate text to display on streamer
                        text = ["Model: {}".format(obj_detect.model_id)]
                        text.append(
                                "Inference time: {:1.3f} s".format(results.duration))
                        text.append("Objects:")

                        # Stop tracking old objects
                        if tracker.count:
                            tracker.stop_all()

                        # Set predictions to the new predictions
                        predictions = results.predictions

                        if not predictions:
                            text.append("no predictions")

                        # use 'number' to identify unique objects
                        number = 0
                        for prediction in predictions:
                            number = number + 1
                            text.append("{}_{}: {:2.2f}%".format(
                                    prediction.label, number,
                                    prediction.confidence * 100))
                            tracker.start(frame, prediction)

                    else:
                        # otherwise, set 'predictions' to the predictions
                        # stored in the correlation tracker object
                        if tracker.count:
                            predictions = tracker.update(frame)

                    # either way, use 'predictions' to mark up the image
                    frame = edgeiq.markup_image(
                            frame, predictions, show_labels=True,
                            show_confidences=False, colors=obj_detect.colors)
                    streamer.send_data(frame, text)
                    frame_idx += 1

                    fps.update()

                    if streamer.check_exit():
                        break

That’s it! 

You can now build and start the app (see this blog if you need assistance), and you should see output similar to that shown below.

Fill out this survey to sign up for our model training beta program and build your own license plate detection model! You can use either our freely available, ready-made dataset, found here, or a larger version here, to build your own model that you can test out using the example app you've just built!

Contributions to this article were made by Todd Gleed and Jason Koo.

Get started now

We are providing professional developers with a simple and easy-to-use platform to build and deploy computer vision applications on edge devices. 

Sign Up for Free