Over at Healthcare Robotics, they’re still working on Project Clickable World, but what they’ve got so far is amazing. A green laser pointer serves as a “mouse” for selecting real-world objects and commanding a robot to interact with them.
The researchers explain how this clickable world interface is supposed to work:
In our object fetching application there are initially virtual buttons surrounding objects within the environment. If the user illuminates an object (“clicks it”), the robot moves to the object, grasps it, and lifts it up. Once the robot has an object in its hand, a separate set of virtual buttons gets mapped onto the world. At this point, clicking near a person tells the robot to deliver the object to the person. Clicking on a tabletop tells the robot to place the object on the table. And clicking on the floor tells the robot to move to the selected location.
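The interaction the researchers describe amounts to a simple context-dependent state machine: which “virtual buttons” are active depends on whether the robot’s hand is empty. Here’s a minimal sketch in Python of that click-to-command mapping; the state names, the `handle_click` function, and the target labels are hypothetical, and classifying what the laser spot actually hit is outside the scope of the sketch:

```python
from enum import Enum, auto

class State(Enum):
    HAND_EMPTY = auto()      # initial state: buttons surround objects
    HOLDING_OBJECT = auto()  # a different button set is mapped onto the world

def handle_click(state, target):
    """Map a laser 'click' to a robot command based on the current state.

    `target` is the classified thing under the laser spot:
    "object", "person", "tabletop", or "floor".
    Returns (new_state, command) where command is None if no
    virtual button is mapped to that target in this state.
    """
    if state == State.HAND_EMPTY:
        if target == "object":
            # Robot moves to the object, grasps it, and lifts it up.
            return State.HOLDING_OBJECT, "fetch"
        return state, None
    # HOLDING_OBJECT: clicks now select delivery/placement locations.
    if target == "person":
        return State.HAND_EMPTY, "deliver"
    if target == "tabletop":
        return State.HAND_EMPTY, "place"
    if target == "floor":
        # Robot relocates while still holding the object.
        return state, "move_to"
    return state, None
```

A fetch-and-deliver sequence would then be three clicks: one on the object, optionally one on the floor to reposition the robot, and one near the person.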
Check out the video below to see this process in action and head over to the Healthcare Robotics page for more info and clips.