The ability to put our clothes on each day is something most of us take for granted, but as computer scientists from the Georgia Institute of Technology recently found out, it’s a surprisingly complicated task—even for artificial intelligence.
As any toddler will gladly tell you, it’s not easy to dress oneself. It requires patience, physical dexterity, bodily awareness, and knowledge of where our body parts are supposed to go inside of clothing. Dressing can be a frustrating ordeal for young children, but with enough persistence, encouragement, and practice, it’s something most of us eventually learn to master.
As new research shows, the same learning strategy used by toddlers also applies to artificially intelligent computer characters. Using an AI technique known as reinforcement learning—the digital equivalent of parental encouragement—a team led by Alexander W. Clegg, a computer science PhD student at the Georgia Institute of Technology, taught animated bots to dress themselves.
In tests, their animated bots could put on virtual t-shirts and jackets, or be partially dressed by a virtual assistant. The system could eventually help create more realistic computer animation or, more practically, physical robots capable of dressing people who struggle to do it themselves, such as those with disabilities or illnesses.
Putting clothes on, as Clegg and his colleagues point out in their new study, is a multifaceted process.
“We put our head and arms into a shirt or pull on pants without a thought to the complex nature of our interactions with the clothing,” the authors write in the study, the details of which will be presented at the SIGGRAPH Asia 2018 conference on computer graphics in December.
“We may use one hand to hold a shirt open, reach our second hand into the sleeve, push our arm through the sleeve, and then reverse the roles of the hands to pull on the second sleeve. All the while, we are taking care to avoid getting our hand caught in the garment or tearing the clothing, often guided by our sense of touch.”
Computer animators are fully aware of these challenges, and often struggle to create realistic portrayals of characters putting their clothes on. To help in this regard, Clegg’s team turned to reinforcement learning — a technique that’s already being used to teach bots complex motor skills from scratch.
With reinforcement learning, systems are motivated toward a designated goal by gaining points for desirable behaviours and losing points for counterproductive behaviours. It’s a trial-and-error process — but with cheers or boos guiding the system along as it learns effective “policies” or strategies for completing a goal.
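The cheer-and-boo loop can be made concrete with a toy example. The sketch below is not the paper’s system; it’s a minimal tabular Q-learning agent in a made-up five-stage "dressing" environment, where a careful action reliably advances and a rushed action risks a setback. All states, actions, and reward values are illustrative assumptions.

```python
import random

# Toy environment: the agent must advance through 5 ordered "dressing" stages.
# Action 0 ("careful move") always advances; action 1 ("rush") risks slipping back.
N_STATES, GOAL = 5, 4

def step(state, action):
    if action == 0:                      # careful move: always advances
        nxt = state + 1
    else:                                # rush: 50% chance of slipping back
        nxt = state + 1 if random.random() < 0.5 else max(state - 1, 0)
    reward = 10.0 if nxt == GOAL else -1.0   # cheer at the goal, boo otherwise
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: trial and error, guided by accumulated reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.randrange(2)                     # explore
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1  # exploit
        nxt, reward, done = step(state, action)
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned "policy": the preferred action in each pre-goal state.
policy = [0 if q[0] >= q[1] else 1 for q in Q[:GOAL]]
print(policy)
```

After training, the agent settles on the reliable careful action: it has learned an effective policy purely from the accumulated points, never having been told the rules of the environment.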
Using well-established machine learning techniques, researchers from the University of California, Berkeley have taught simulated humanoids to perform over 25 natural motions, from somersaults and cartwheels through to high leg kicks and breakdancing. The technique could lead to more realistic video gameplay and more agile robots.
The difference with self-dressing, however, is the need for haptic perception. Animated characters need to touch their clothing to infer progress. When dressing themselves, the bots must apply force to move their virtual arms through the clothing, while avoiding forces that could damage the garment, or cause a hand or elbow to get stuck.
Consequently, the researchers had to add a second important element to the project: a physics engine capable of simulating the pulling, stretching, and manipulation of malleable materials, namely cloth.
During the training process, a bot gained points by successfully grasping the edge of a sleeve or poking its head through the collar. But when an action resulted in tearing or getting its arms hopelessly tangled, it would lose points.
Early in the project, however, the researchers realised that a single, coherent dressing policy wasn’t going to work. The complicated task of dressing had to be broken down into a series of sub-policies. This makes sense; when we teach children to dress themselves, we teach it one step at a time.
The act of dressing can’t be captured by a single monolithic policy; it’s a step-by-step process that leads toward a desired goal. Clegg’s team developed a policy-sequencing algorithm for this very purpose: at any given stage, an animated bot knew where it was in the dressing process, and which step was required next to advance toward the desired goal.
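The chaining idea can be sketched as a sequence of sub-policies, each run until its own completion check passes before handing off to the next. The subtask names and the trivial stand-in "policies" below are illustrative placeholders, not the paper’s learned controllers; in the real system each would be a trained neural-network policy acting in the cloth simulation.

```python
# Stand-in sub-policies: each nudges the (toy) state toward its own subgoal.
def grasp_collar(state):   return dict(state, collar_held=True)
def tuck_head(state):      return dict(state, head_in=True)
def first_sleeve(state):   return dict(state, arm1_in=True)
def second_sleeve(state):  return dict(state, arm2_in=True)

# (sub-policy, completion predicate) pairs, in dressing order.
SEQUENCE = [
    (grasp_collar,  lambda s: s.get("collar_held")),
    (tuck_head,     lambda s: s.get("head_in")),
    (first_sleeve,  lambda s: s.get("arm1_in")),
    (second_sleeve, lambda s: s.get("arm2_in")),
]

def run_dressing(state, max_steps=100):
    for policy, done in SEQUENCE:
        steps = 0
        while not done(state):           # run this sub-policy until its subgoal holds
            state = policy(state)
            steps += 1
            if steps > max_steps:
                raise RuntimeError("subtask failed; no hand-off to next policy")
    return state

final = run_dressing({})
print(final)
```

The sequencer knows where the bot is in the process (which predicate last passed) and which sub-policy to invoke next, which is the role the team’s policy-sequencing algorithm plays.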
Clegg and his colleagues say their new paper is the first to show that reinforcement learning, in conjunction with cloth simulation, can be used to teach a “robust dressing control policy” to bots, though to make it work it was “necessary to separate the dressing task into several subtasks” and have the system “learn a control policy for each subtask,” the authors write in the study.
Importantly, the study was limited to upper-body tasks; performing lower-body dressing tasks would have introduced an entirely new set of complications, such as maintaining balance while putting on pants. Also, the system was computationally demanding.
Eventually, the researchers would like to incorporate memory into the system, which could “reduce the number of necessary subtasks and allow greater generalization of learned skills,” the authors write. Indeed, like the toddler who quickly acquires competency and flexibility through experience, the researchers would like their system to do likewise.
As a final note, this study shows how difficult it will be to create general artificial intelligence. It was a triumph of AI research to create machines capable of defeating grandmasters at chess and Go, but creating systems that can perform more mundane tasks, such as dressing themselves, is proving to be an enormous challenge as well.