Robots are good at a lot of things, but their track record at picking up objects is poor. So just how hard is it to teach one to pick up an object on demand from a table full of clutter?
That’s what a team from Carnegie Mellon University has been trying to find out. Pictured here is Baxter: a modern two-armed industrial robot that usually performs repetitive tasks in factories. But instead of having it do its usual business, the researchers decided to try to get it to work out how to pick up objects in a more unstructured environment, otherwise known as a table full of junk.
Baxter has two ‘fingers’ at the end of each arm that can be pinched together to hold an object, along with a high-res camera which is used to see what’s happening. It also has a Kinect up top to gain a broad overview of what’s in front of it. Technology Review explains how the researchers got Baxter to learn to pick things up:
[The team] programmed Baxter to grasp an object by isolating it from its neighbours, then to pick a random point in that area, rotate the grippers to a certain angle and then move the grippers down vertically to attempt a grasp. The robot then lifts its arm and determines whether the grasp has been successful using force sensors. For each point, it repeats the grasping process 18 times, each time after rotating the gripping angle by 10 degrees. To allow the robot to learn, Pinto and Gupta placed a variety of objects on the table in front of Baxter, and simply left it for up to 10 hours a day, without human intervention.
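The trial loop described above can be sketched in a few lines of Python. This is a simulation only: the region coordinates, gripper width, and the `attempt_grasp` stand-in for Baxter’s force-sensor check are all hypothetical names, not the researchers’ actual code.

```python
import random

GRIPPER_WIDTH = 0.04  # hypothetical maximum gripper opening, in metres


def attempt_grasp(point, angle_deg, object_width):
    """Simulated grasp attempt: stands in for lowering the gripper and
    checking the force sensors. Here it succeeds only when the object
    is narrow enough to fit between the fingers."""
    return object_width <= GRIPPER_WIDTH


def collect_grasp_data(object_region, object_width, angle_step=10, n_angles=18):
    """Isolate an object's region, pick one random point inside it, then
    try a vertical grasp at each of 18 gripper angles spaced 10 degrees
    apart, recording a (point, angle, success) label for every trial."""
    (x0, y0), (x1, y1) = object_region
    point = (random.uniform(x0, x1), random.uniform(y0, y1))
    labels = []
    for i in range(n_angles):
        angle = i * angle_step  # 0, 10, ..., 170 degrees
        success = attempt_grasp(point, angle, object_width)
        labels.append((point, angle, success))
    return labels


# One pass over a 10 cm x 10 cm region containing a 3 cm-wide object.
labels = collect_grasp_data(((0.2, 0.1), (0.3, 0.2)), object_width=0.03)
```

Each run yields 18 labelled trials for a single point, which is what makes the unattended, hours-long data collection possible.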
Baxter started out with a neural network that had already been trained to recognise objects in images, but that was all. Over time (700 hours in all, making 50,000 grasp attempts on 150 different objects) the robot slowly learned to pick things up from a table littered with toys and household objects.
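Those 50,000 recorded trials become the training data for the network: each image patch gets a per-angle success label. A minimal sketch of that labelling step, assuming hypothetical names throughout (the actual paper fine-tunes a pretrained convolutional network on these labels, which is omitted here):

```python
from collections import defaultdict

ANGLE_STEP = 10   # degrees between tried gripper angles
N_BINS = 18       # 18 bins x 10 degrees covers a half-turn of the symmetric gripper


def make_examples(trials):
    """Turn raw (patch_id, angle, success) trial records into training
    targets for an 18-way grasp-angle classifier: one 18-element vector
    per image patch, with a 1 in every angle bin that yielded a grasp."""
    per_patch = defaultdict(lambda: [0] * N_BINS)
    for patch_id, angle, success in trials:
        bin_index = (angle // ANGLE_STEP) % N_BINS
        if success:
            per_patch[patch_id][bin_index] = 1
    return dict(per_patch)


# Three trials on one patch: grasps at 0 and 170 degrees held, 10 degrees slipped.
trials = [("patch0", 0, True), ("patch0", 10, False), ("patch0", 170, True)]
examples = make_examples(trials)
```

The point of the binning is that success prediction becomes an ordinary image-classification problem, which is why starting from a network pretrained on image recognition helps.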
Now, Baxter can pick up an object it’s seen before 73 per cent of the time, and an object it’s never seen before 66 per cent of the time. It can also judge its own chances, predicting with 80 per cent accuracy whether it will be able to grasp an object just by looking at it. The results are published on the arXiv server.
A ‘successful’ grasp isn’t always what you’d call a success, though. “Some of the grasps, such as the red gun… are reasonable but still not successful due to the gripper size not being compatible with the width of the object,” explains the team. “Other times even though the grasp is ‘successful’, the object falls out due to slipping.”
There’s still some way to go before robots can learn to perform delicate tasks for themselves, then — but this is a start.