All about gitamini's sensors
In our previous blog post we talked all about the design and manufacturing process of gitamini - if you haven’t read that one yet, go check it out! This time around, we’re talking all about gitamini’s sensors and how they allow the robot to follow seamlessly behind its leader. We spoke with Derek, PFF’s Robotics Software Engineering Manager, about the science behind the sensors. Keep reading to hear what he had to say:
gitamini has three different sensors that allow it to detect and follow its leader: an RGB camera, a stereo depth camera, and a radar. At a high level, these three sensors work together to let gitamini identify its leader, follow them, and continuously track the leader’s movements to ensure the robot is following at an appropriate speed and distance. Each sensor plays a big part in allowing gitamini to operate properly and move the way that people do.
The stereo depth camera is the main sensor that tells gitamini the location of a person in the 3D space in front of the robot and has a 90-degree field of view. To complement the stereo camera, we use a radar with a 170-degree field of view so that the robot can see more and better understand its environment. The radar and stereo depth camera work together to identify the leader and create a “track” that best estimates where the leader is and where they will go - essentially predicting the leader’s next move.
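To make the idea of a “track” concrete, here is a minimal sketch of one common way to fuse position measurements into a motion estimate - an alpha-beta filter with a constant-velocity model. All names, gains, and units here are illustrative assumptions, not PFF’s actual implementation.

```python
from dataclasses import dataclass

# Illustrative smoothing gains for the alpha-beta filter below
ALPHA, BETA = 0.85, 0.3

@dataclass
class Track:
    """A toy leader track: 2D position (meters) and velocity (m/s)."""
    x: float
    y: float
    vx: float
    vy: float

    def predict(self, dt: float) -> tuple[float, float]:
        """Where we expect the leader to be dt seconds from now."""
        return self.x + self.vx * dt, self.y + self.vy * dt

    def update(self, meas_x: float, meas_y: float, dt: float) -> None:
        """Blend a new measurement (from stereo depth or radar) into the track."""
        pred_x, pred_y = self.predict(dt)
        rx, ry = meas_x - pred_x, meas_y - pred_y  # innovation: measurement minus prediction
        self.x = pred_x + ALPHA * rx
        self.y = pred_y + ALPHA * ry
        self.vx += BETA * rx / dt
        self.vy += BETA * ry / dt
```

Because both sensors produce positions in the same frame, either one can feed `update()` - which is also why the radar can keep the track alive when sun glare punches gaps in the depth data.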
One hurdle for the stereo depth camera is direct sunlight, which can cause the sensor to fail. In this scenario, the additional data from the radar helps gitamini course-correct and stay on track. “The sun causes a lot of glare which can cause gaps in the depth data, but the sun doesn't affect radar that way so it can keep tracking,” Derek said.
In addition to the stereo depth camera and radar, the RGB camera helps distinguish people from objects that aren’t people. It also builds a unique model of gitamini’s leader: if the robot loses sight of its leader during autonomous movement, it can remember the clothes the leader was wearing, look for a person wearing those colors, and automatically re-pair to them - and not to someone else.
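As a rough illustration of color-based re-pairing, the sketch below compares a coarse color histogram of a candidate person against the one remembered for the leader. Every function name, the bin count, and the threshold are hypothetical; a real appearance model would be far richer than a histogram.

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram. pixels is a list of (r, g, b) tuples in 0..255."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # normalize so histograms are comparable

def similarity(h1, h2):
    """Histogram intersection: 1.0 means an identical color mix."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def is_same_leader(leader_hist, candidate_pixels, threshold=0.7):
    """Re-pair only if the candidate's clothing colors match the remembered leader."""
    return similarity(leader_hist, color_histogram(candidate_pixels)) >= threshold
```

A person in a red jacket would score near 1.0 against the remembered red-jacket histogram, while someone in blue would score near 0 and be skipped.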
When asked how gitamini knows who to follow in crowded environments, Derek noted that “we’re always estimating where the leader is going to be with their next step as well as their speed, so if someone were to walk between the leader and the robot at a different speed and direction, it can recognize that and not track that person, but continue to track the original leader.”
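The rejection Derek describes can be pictured as a simple “gate” around the predicted leader position: a detection only counts as the leader if it lands near where the leader was expected to be. The function name and the 0.5 m gate radius below are assumptions for illustration, not PFF’s actual values.

```python
import math

def matches_track(predicted_xy, detection_xy, gate_m=0.5):
    """True if a detection falls within gate_m meters of the predicted leader position."""
    dx = detection_xy[0] - predicted_xy[0]
    dy = detection_xy[1] - predicted_xy[1]
    return math.hypot(dx, dy) <= gate_m
```

A passer-by cutting between the leader and the robot shows up well outside the gate (and with a different speed), so the track simply ignores that detection and keeps following the original leader.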
There are a lot of factors that went into building gitamini and choosing which sensors would give the robot the best following capabilities. “Through testing and iteration in sunny, outdoor situations, we looked into additional sensors [in addition to stereo depth sensors] that were less susceptible to the sun, which is how we decided on including radar. Neither radar nor the stereo camera gets color information from the environment, so we added an RGB camera to help us distinguish people and objects via color,” Derek said when asked how we decided on the three sensors currently in our robots.
Want to learn more about gitamini or gitaplus? Follow us on social media and let us know what questions you have and who you want to hear from next!