Vision Processing
The FIRST Robotics Competition game for the 2016-2017 season is titled Steamworks. In the competition, robots gather gears from the ground, deposit them onto wooden pegs on their team's airship, shoot balls into the corner towers, and climb a rope at the end of the match. Each match has two modes of operation: autonomous and teleop. During the 15-second autonomous period, robots can only be operated through code; students are not permitted to touch any controllers or joysticks to steer their robots. During the teleop period, students are free to operate their robots with joystick controllers or arcade sticks.
My robotics team only has 3 main programmers, and the 2 other programmers decided to work on the drive/functional portion of the robot code. I chose to take on vision processing, which would allow my team's robot to capture images from a USB camera mounted on the robot, run some algorithms, and drive toward the intended target during the autonomous period. Below is an overview of this year's game.
The Initial Stages of Computer Vision
Below are the original images captured by the USB camera. The camera has a green LED ring light hot glued around it, which helps it detect the reflective pieces of tape shown on the board.
How does vision processing work?
OpenCV (Open Source Computer Vision) is a real-time computer vision library originally developed by Intel. It provides programming functions that I used from Java in Eclipse, my programming IDE, to interpret camera images. OpenCV is able to identify objects and understand motion, both of which are important functions that my team's robot needs to perform.
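To give a concrete idea of what this looks like, here is a minimal sketch (not our exact team code) of loading the OpenCV native library and grabbing a frame from the USB camera in Java, assuming the OpenCV 3.x Java bindings; the class name is just for illustration:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class CameraGrab {
    // Load the native OpenCV library before using any OpenCV classes.
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        VideoCapture camera = new VideoCapture(0); // USB camera at device index 0
        Mat frame = new Mat();
        if (camera.read(frame)) {
            // Resize to the working resolution used by the rest of the pipeline.
            Imgproc.resize(frame, frame, new Size(640, 480));
            System.out.println("Captured " + frame.width() + "x" + frame.height() + " frame");
        }
        camera.release();
    }
}
```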
- Capture an image with the camera; resize the frame to 640x480 pixels. (The core steps in this list are sketched in code after the list.)
- Convert BGR color values to HSV color values.
- Run a color filter to detect the reflective tape; find groups of reflective tape pixels.
- Identify the biggest two groups.
- Match the groups against the ideal two-rectangle template defined in the code.
- Apply size culling: remove small groups that fall below a set pixel threshold.
- Dilate the white areas in the image so that the detected regions appear larger.
- Erode white areas in the remaining groups.
- Randomly assign colors to the two biggest groups to differentiate them from each other.
- Find all pixels of the two largest rectangular groups.
- Count all non-black pixels.
- Construct a template image that tells the computer what the intended target looks like; in this case, the two rectangular pieces of reflective tape.
- Calculate the centroids of the two groups of reflective tape; this will help center the robot when the camera is sweeping to look for the tape.
- Bitwise-AND the mask with the image. This arithmetic operation extracts the two large rectangular targets from the rest of the frame and produces the result image used for matching.
- Calculate the template match percentage. If it matches, the robot moves forward towards the target.
- Measure the distance from the camera to the intended target. To calibrate this, my friend held a board at known distances from the camera and I entered those distances into a lookup table stored in the code.
- Find the centers of mass of the images.
- Use the distance lookup table that was created in the first step.
- Calculate the distance between the two groups in pixels; the farther apart they are, the closer the robot is to the target.
- Convert distance to feet.
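The sketch below pulls the core steps together, again assuming the OpenCV 3.x Java bindings. The HSV bounds, size threshold, and calibration table are placeholder values (our real numbers had to be retuned at every venue because of lighting), the bounding-box center stands in for the centroid calculation, and the template-matching and bitwise-AND steps are omitted for brevity:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class TapeTargetPipeline {
    // Load the native OpenCV library before using any OpenCV classes.
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Placeholder HSV bounds for the green LED ring reflection; the real
    // values were retuned at every competition because of lighting.
    private static final Scalar HSV_LOWER = new Scalar(55, 100, 100);
    private static final Scalar HSV_UPPER = new Scalar(85, 255, 255);

    // Hypothetical calibration table: pixel gap between the two tape
    // strips mapped to distance in feet (a larger gap means a closer robot).
    private static final double[] GAP_PIXELS = { 40, 60, 90, 140 };
    private static final double[] DIST_FEET  = { 12,  8,  5,   3 };

    /** Processes one 640x480 BGR frame and prints steering error and distance. */
    public static void process(Mat frame) {
        // Convert BGR -> HSV, then color-filter for the lit reflective tape.
        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        Core.inRange(hsv, HSV_LOWER, HSV_UPPER, mask);

        // Dilate then erode the white areas to clean up the mask.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));
        Imgproc.dilate(mask, mask, kernel);
        Imgproc.erode(mask, mask, kernel);

        // Find groups of tape pixels and cull groups below a size threshold.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
                Imgproc.CHAIN_APPROX_SIMPLE);
        contours.removeIf(c -> Imgproc.contourArea(c) < 100);
        if (contours.size() < 2) {
            return; // the two-strip target is not in view
        }

        // Keep the two largest groups and take their bounding-box centers.
        contours.sort((a, b) -> Double.compare(Imgproc.contourArea(b), Imgproc.contourArea(a)));
        Point left = center(Imgproc.boundingRect(contours.get(0)));
        Point right = center(Imgproc.boundingRect(contours.get(1)));

        // Horizontal offset of the target's midpoint from image center (320 px)
        // tells the drive code how far to sweep to line up on the peg.
        double steeringError = (left.x + right.x) / 2.0 - 320;

        // Pixel gap between the strips -> distance in feet via the lookup table.
        double distanceFeet = lookupDistance(Math.abs(left.x - right.x));

        System.out.printf("steering error = %.1f px, distance = %.1f ft%n",
                steeringError, distanceFeet);
    }

    private static Point center(Rect r) {
        return new Point(r.x + r.width / 2.0, r.y + r.height / 2.0);
    }

    // Linear interpolation between the calibrated points.
    private static double lookupDistance(double gap) {
        for (int i = 1; i < GAP_PIXELS.length; i++) {
            if (gap <= GAP_PIXELS[i]) {
                double t = (gap - GAP_PIXELS[i - 1]) / (GAP_PIXELS[i] - GAP_PIXELS[i - 1]);
                return DIST_FEET[i - 1] + t * (DIST_FEET[i] - DIST_FEET[i - 1]);
            }
        }
        return DIST_FEET[DIST_FEET.length - 1]; // closer than the nearest calibrated point
    }
}
```

In this sketch, the horizontal offset of the midpoint between the two groups is what would be handed to the drive code to center the robot on the peg, and the pixel gap between the groups is interpolated through the lookup table to get an approximate distance in feet.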
Demonstration - Houston Lone Star North Regional
I am on the drive team, so I am usually on the field during our competition matches. However, for some matches I switched off with a teammate, which is how I was able to film this video of one of our matches. At the beginning of the video, you can see robots drive up to the airships, where some of them deposit a gear. My team (5417, in the center blue position on the right side of the video) went for the center gear peg this match, and the computer algorithms quickly analyzed the image to deliver a perfect gear deposit on the center peg. We won this quarterfinal match, 447-255.
END PRODUCT / REFLECTION
In reflection, over 100 hours went into this project. Learning about computer vision was much more difficult than I had imagined it to be (probably 100000000000000000x more difficult). I started this project in late January and finished in late March. However, I still had to edit the code at regional competitions, as the color values for the tape were different at every competition we went to due to lighting. Last year, I was new to programming a robot, so I learned from my mentors and peers how to program in LabVIEW. This year, I wanted to work on computer vision since it was an important part of the game, and my two software mentors helped me through the process of learning Java and the FRC/National Instruments tools. Although it was possible for my team's robot to get by without vision processing, it soon became clear how useful it was to have a robot that could analyze images and deposit a gear during the autonomous period. Having at least one gear deposit per alliance (3 robots competing on the same side) during the autonomous period became a defining factor in which side would win a match. My team's combined drive/functional code and computer vision processing helped Gerald (my team's robot) reach the final round of playoffs at the 2017 Dallas Regional Competition, where we won the competition and will advance to the FRC World Championship in late April.
Update: Eagle Robotics was on the 7th-place alliance at the Lone Star Central Regional and made it to the quarterfinals. We were the 5th-place alliance captains at the Lone Star North Regional and made it to the semifinals. At that competition, we were 2 seconds away from having two robots climb, leaving us so close to (yet so far from) achieving a 503-point match, one of the highest scores in this year's game history. We are now working on incorporating computer vision with our shooter, which will be able to shoot balls into the corner ball towers during the autonomous period.