Christian Piper is a 16-year-old high school student from Pennsylvania who equipped his FIRST Robotics Competition robot with machine learning vision. He did this with the alwaysAI platform.
The FIRST Robotics Competition (FRC) is the world's premier robotics competition for students to test their knowledge of computer science and robotics. The FRC game rules and manual come out every year at the beginning of January. Given a strict rule set, limited time (traditionally a six-week period) and resources, the competitors have to design, build, program and test a robot for the game. Christian wants to share what he did with computer vision and the alwaysAI platform with future FRC participants so that they can have access to machine learning computer vision too. Currently, ML computer vision is beyond the knowledge and resources of the majority of students competing in the FRC. In the interview that follows, we explore his plans for implementing ML computer vision in his team's robot and ask him how he got involved with robotics, FRC and the alwaysAI platform.
Stephanie: How did you get involved with robotics and the FRC at such a young age?
Christian: I’ve always been interested in robotics, starting in middle school. When I was in STEM class, I found the robotics component moved too slowly. I had already been working with Arduino and Raspberry Pi, so when the class got to that point, I was already ahead. Once I was in high school and old enough to participate in the FRC, I already had some experience with robotics and I wanted to put my skills to the test and challenge myself, so I joined the FRC team at my high school. At this point, I was really excited to try to get robots to perform more complicated tasks than what was being taught in STEM class.
Stephanie: What led you to discover alwaysAI and use it for your FRC robot?
Christian: I wanted to equip our robot with machine learning vision, so I started to do some research online. Several of the team members and I attempted a simple vision system for the first time last year which relied on colors. We ran into issues with this type of system which we didn’t have time to work through at that point.
The robot became unresponsive while it was processing images, which made the system impractical for the FRC game. For this reason, we had to give up on vision altogether for the 2019 game. Once the 2019 season was finished, I wanted to revisit computer vision and started doing more research, this time on machine learning and deep learning computer vision. I thought, if I’ve got the summer to work on it, I might as well go big or go home. This is when I came across alwaysAI. At that point I hadn’t found anything else like the alwaysAI platform, handling everything from the implementation process to image processing management.
Stephanie: What about the other FRC teams, have any of them been able to implement computer vision in any way?
Christian: I’ve noticed other teams attempt machine learning computer vision, but mostly unsuccessfully. I also noticed teams using a color- or luminance-based style of computer vision, which prevents them from executing anything as sophisticated as what is possible with machine learning systems. What most teams are doing is looking for a color value or a brightness value of the object, so the thresholds have to be retuned manually with sliders whenever the robot changes environments.
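The color-threshold style of vision Christian describes can be illustrated with a small sketch. Real FRC teams typically do this with OpenCV (e.g. HSV thresholding via `cv2.inRange`); this pure-Python stand-in, with hypothetical bounds and a toy "image," just shows the idea: each pixel is kept only if every channel falls inside hand-tuned min/max bounds — the "sliders" that must be retuned when lighting changes.

```python
# Simplified color-threshold vision, the style of system described above.
# The bounds below are hypothetical and stand in for the manually tuned
# sliders that must be readjusted whenever the environment changes.

# Hand-tuned per-channel (R, G, B) bounds for an "orange" target.
LOWER = (200, 80, 0)
UPPER = (255, 160, 60)

def mask_pixel(pixel, lower=LOWER, upper=UPPER):
    """Return True if every channel lies inside the tuned bounds."""
    return all(lo <= ch <= hi for ch, lo, hi in zip(pixel, lower, upper))

def find_target(image):
    """Return (row, col) coordinates of pixels that pass the threshold."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, pixel in enumerate(row)
            if mask_pixel(pixel)]

# Tiny 2x3 "image": one orange pixel among dark background pixels.
frame = [
    [(10, 10, 10), (230, 120, 30), (10, 10, 10)],
    [(10, 10, 10), (10, 10, 10), (90, 90, 90)],
]
print(find_target(frame))  # -> [(0, 1)]
```

The weakness is visible in the hard-coded bounds: a change in arena lighting shifts the pixel values, and the whole system breaks until someone retunes the sliders — exactly the problem a learned object detector avoids.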
I would say that out of the thousands of teams in the FRC, only about 20 of them have been able to implement machine learning computer vision. Their projects were well done, but the complication of implementing this type of vision reliably has restricted its use to teams who have access to engineers with experience in the field. Without that kind of help, FRC teams have difficulty implementing computer vision, as there is a very steep learning curve and it's a long, complicated process. However, with alwaysAI, it wasn’t nearly as complicated, nor resource-intensive, making it possible for our FRC 2020 robot to have machine learning vision.
Stephanie: Would you say that discovering and using the alwaysAI platform put your team at an advantage for this year's game?
Christian: Definitely. I believe the robots will be able to perform the challenges much more effectively with vision, which makes it more likely that a team will do well. The resources required to pull off machine learning vision on a robot without alwaysAI make it out of reach for the vast majority of teams. However, instead of using alwaysAI to keep our team at an advantage, I would actually like to use it to level the playing field. Machine learning computer vision is a highly valuable asset, and if there is a way to make it accessible to all of the teams with a platform like alwaysAI, then my team and I want to help make that possible.
Stephanie: That is great! I know that you have been working with alwaysAI Senior Software Engineer, Eric VanBuhler, and Director of Computer Vision, Vikram Gupta, to make your model available publicly so other teams can also have access to it and build off it in future competitions. On that note, can you tell us about this model you have been working on?
Christian: Once I started working with the alwaysAI platform, I realized that I needed to retrain one of the object detection models to be able to identify the very specific items required for the FRC game. This is when I got in touch with Vik and Eric. This was around August 2019, and at the time the documentation was limited and there weren’t many tutorials available either.
Eric and Vik helped me understand the features of the platform and helped me retrain a model based on the 2019 FRC game as a test. This model tracks an orange kickball and a disk, which were the game pieces for 2019. The second model is meant for this year’s game (2020 Infinite Recharge) and tracks seven-inch dodgeballs (yellow, foam core, gator skin) and the target/goal, which is very specific to the FRC and this year’s competition.
The function of vision is to track the balls to make them easier to pick up and then from there, track the goal and align the robot to the goal and shoot. The robot has to shoot from anywhere between 5-30 feet away, and it is difficult for a human driver to line up. Because of this, it is highly beneficial to have the robot lined up automatically, which is where the machine learning computer vision component becomes essential. We are now working on having a second model publicly available, which will be based on the 2020 FRC challenge.
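The align-and-shoot step Christian describes — detect the goal in the camera image, then turn the robot until the goal is centered — can be sketched as a simple proportional controller. The names, frame width, and gain below are hypothetical; a real FRC robot would feed the resulting turn value into its drivetrain control loop (e.g. via WPILib).

```python
# Proportional alignment sketch: steer toward a detected goal by comparing
# the goal's bounding-box center with the image center. The gain and frame
# width are assumed values; on a real robot they would be tuned in testing.

FRAME_WIDTH = 640   # camera resolution in pixels (assumed)
KP = 0.8            # proportional gain (hypothetical, tuned on the robot)

def turn_command(box_left, box_right, frame_width=FRAME_WIDTH, kp=KP):
    """Return a turn value in [-1, 1]: negative = turn left, positive = right."""
    box_center = (box_left + box_right) / 2
    # Normalized horizontal error: -1 (far left) .. +1 (far right).
    error = (box_center - frame_width / 2) / (frame_width / 2)
    return max(-1.0, min(1.0, kp * error))

print(turn_command(400, 560))  # goal right of center -> positive turn
print(turn_command(280, 360))  # goal centered -> no turn needed
```

Once the turn command settles near zero, the goal is centered in the frame and the robot can shoot — which is what makes auto-alignment workable from anywhere in that 5–30 foot range.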
Stephanie: What did you enjoy most about working with the alwaysAI platform?
Christian: I like how easy it is. You put in a couple of lines of code, and it works. Machine learning computer vision is notoriously difficult to implement and is known for taking more than a few lines of code to get an app running. It feels like a big part of a very complicated process is handled for me, which is why my team and I are able to use it as a part of this project, where there are so many components and limited resources and time. Without alwaysAI, machine learning vision simply wouldn’t have been possible for our team’s robot. The only major issue was that the objects in the FRC game are so specific, but being able to retrain the existing models made it possible nonetheless. Vik and Eric were a huge help in that area. Another huge benefit of alwaysAI was access to the team members and their knowledge and willingness to help.
Christian plans on competing in the FRC until he finishes high school. He wants to study artificial intelligence and robotics in college and build on the knowledge he has accumulated so far. It was a pleasure talking to him, and hearing his insights about his experience with robotics and the alwaysAI platform. He is pioneering machine learning computer vision for all FRC teams to use, which is huge! We are happy to help on this journey in any way we can. We look forward to seeing what else he comes up with.
The FRC team that Christian is a part of is called SparTechs (Team 834), and they attend Southern Lehigh High School in Pennsylvania.