Robots at Google are learning object recognition and perception; in other words, how to think like a child. VentureBeat describes a recently published paper, “Grasp2Vec: Learning Object Representations from Self-Supervised Grasping,” by Google software engineer Eric Jang and Berkeley Ph.D. and former Google research intern Coline Devin, which explains how the Grasp2Vec algorithm helps robots observe and manipulate objects in their environment.
“In robotics, this type of … learning is actively researched because it enables robotic systems to learn without the need for large amounts of training data or manual supervision,” Jang and Devin wrote. “By using this form of self-supervision, [machines like] robots can learn to recognize … object[s] by … visual change[s] in the scene.”
In collaboration with X Robotics, their team “taught” a robotic arm to grasp objects “unintentionally” and, over time, learn representations of various objects. Those representations then made possible the “intentional grasping” of tools and toys selected by the researchers.
As VentureBeat reports, the arm used reinforcement learning to gain the ability to grasp objects, examine them with its camera, and answer elementary questions about object recognition (e.g., “Do these objects match?”). A perception system that analyzes a series of three images (the scene before grasping, the scene after grasping, and the grasped object in isolation) added further meaningful context. In testing, the system achieved an impressive 80 percent success rate.
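The intuition behind the three-image analysis can be written as embedding arithmetic: the feature embedding of the scene before grasping, minus the embedding of the scene after grasping, should approximate the embedding of the grasped object. A minimal NumPy sketch of that relation follows; the toy encoder, object names, and dimensions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned encoder: each object contributes a fixed
# feature vector, and a scene's embedding is the sum of the embeddings
# of the objects it contains (the additive property Grasp2Vec's
# training objective encourages).
EMB_DIM = 16
object_vectors = {name: rng.normal(size=EMB_DIM)
                  for name in ["ball", "block", "cup"]}

def embed_scene(objects):
    """Embed a scene as the sum of its objects' feature vectors."""
    return sum(object_vectors[o] for o in objects)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The scene starts with three objects; the arm grasps the "block",
# which therefore disappears from the post-grasp scene.
pre_grasp = embed_scene(["ball", "block", "cup"])
post_grasp = embed_scene(["ball", "cup"])

# Key relation: (pre - post) should match the grasped object's embedding.
grasped_estimate = pre_grasp - post_grasp

# Identify the grasped object by nearest cosine similarity.
best = max(object_vectors,
           key=lambda o: cosine(grasped_estimate, object_vectors[o]))
print(best)
```

Because the grasped object is exactly what vanished between the two scene images, no human labels are needed: the robot's own actions generate the training signal.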
“We show how robotic grasping skills can generate the data used for learning object-centric representations,” the researchers wrote. “We then can use representation learning to ‘bootstrap’ more complex skills like instance grasping, all while retaining the self-supervised learning properties of our autonomous grasping system. Going forward, we are excited not only for what machine learning can bring to robotics by way of better perception and control, but also what robotics can bring to machine learning in new paradigms of self-supervision.”
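One way the learned representations can “bootstrap” instance grasping, as the researchers describe, is to score each grasp by how closely the grasped object's embedding matches a goal object's embedding and use that score as a self-supervised reward. The sketch below illustrates this idea; the function names, threshold, and vectors are assumptions for illustration, not the paper's exact reward:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def instance_grasp_reward(goal_emb, grasped_emb, threshold=0.8):
    """Self-supervised reward: 1.0 if the grasped object's embedding
    is close enough to the goal object's embedding, else 0.0.
    (threshold is an illustrative value, not from the paper.)"""
    return 1.0 if cosine(goal_emb, grasped_emb) >= threshold else 0.0

rng = np.random.default_rng(1)
goal = rng.normal(size=16)

# A grasp of (nearly) the goal object is rewarded...
success = instance_grasp_reward(goal, goal + 0.1 * rng.normal(size=16))
# ...while a grasp of a very different object is not.
failure = instance_grasp_reward(goal, -goal)
print(success, failure)
```

No one labels which object was grasped; the perception system's own embedding comparison supplies the reward, which is what lets the more complex skill retain the self-supervised character of the underlying grasping system.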