Robots Encountering Socks
Filed by KOSU News in Science.
February 6, 2012
“Consider the perceptual challenges inherent in the robotic manipulation of unseen socks,” says an engineering team at the University of California, Berkeley.
Suppose you’re a robot. If you had a camera in your head, and you could watch a human doing a simple task, like bunching a pair of socks, could you, just by watching, learn to do it too?
Well, let’s see…
Pieter Abbeel runs a lab at Berkeley that builds what he calls “Apprentice Robots.” They are not built the usual way, with lines of code telling them exactly what to do. No, instead, they are given “perception mechanisms” to analyze what they’ve seen, then “planning and simulation” mechanisms, to copy tasks. And, through trial and error, it seems they can learn.
In this case, the robot in the video has to grasp the correct (open) end of each sock, even though they are pointed in different directions, and then put them on the post. Apparently Abbeel’s robots can study a person or even a series of photographs and figure out how to do this, sometimes after only ten or so demonstrations.
Technology Review magazine says “Abbeel taught one robot how to fold laundry by giving it some general rules about how fabric behaves, and then showed it around 100 images of clothing so it could analyze how that particular clothing was likely to move as it was handled.” No live human instruction. Just pictures.
In this towel-folding video, you can almost feel the robot studying the cloth, trying to figure out which two points are farthest apart and therefore the best places to grasp and fold. It’s spooky.
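That "farthest apart" idea is something you can actually compute. As a minimal sketch (not Abbeel's actual pipeline), suppose the robot's vision system has already produced a handful of candidate grasp points on the cloth; picking the pair with the greatest separation is then a simple comparison over all pairs:

```python
from itertools import combinations
from math import dist

def farthest_pair(points):
    """Return the two points with the greatest Euclidean distance
    between them (brute force, fine for a handful of candidates)."""
    return max(combinations(points, 2), key=lambda pair: dist(*pair))

# Hypothetical corner candidates detected on a crumpled towel (x, y in cm):
corners = [(0.0, 0.0), (3.0, 1.0), (10.0, 2.0), (4.0, 8.0)]
a, b = farthest_pair(corners)  # the two best places to grasp and fold
```

The candidate points and coordinates here are invented for illustration; the hard part in the real system is the perception step that finds those candidates on a floppy, self-occluding piece of fabric in the first place.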
What these videos tell us is that the things we humans do so easily (most three-year-olds can fold socks and towels) are, when you break them down, highly complex behaviors. “Socks,” Abbeel writes, “are extremely irregular. [They] may be right-side-out, inside out, or arbitrarily bunched.” Knowing how to unfold and handle them is, mathematically, an extraordinarily subtle business.
It’s not that robots are stupid. It’s that we are so smart. And what Abbeel is exploring is how to give robots a kind of bottom-up intelligence that lets them, on their own, do tasks and make sense of an anything-can-happen world.
The most amazing robot I’ve seen lately is designed for just that — to improvise solutions in messy, chaotic situations. Boston Dynamics has a bot they call “Big Dog”. This is it:
It looks like a four-legged tube, its legs oddly facing each other. But give it a fierce kick, try to knock it down, make it climb through mud, skid along ice, trek through snow, or clamber over a jumble of cinderblocks, and while it sometimes collapses into a helpless plop, much of the time it rights itself and keeps going. How it learns this, I’m not sure. Developed for the Defense Department to go where soldiers fear to tread, it has (or this music video makes it look like it has) an almost animal-like ability to cope in a slip-slidy, bad, bad world. Even without the music, I’m bug-eyed with admiration. [Copyright 2012 National Public Radio]