UAVs are great, but most of them are also dumb as a sack of batteries and plastic. So dumb, in fact, that they have a whole chapter of YouTube devoted to their crashes. But a PhD student at MIT thinks he's figured out a way to give them brains -- or the next best thing.
As Andrew Barry, a PhD candidate at MIT's Computer Science and Artificial Intelligence Lab, puts it: it's not hard to build a drone these days. The hard part is "how to get them to stop running into things." Barry's answer is an algorithm that uses some clever, efficient programming to make a drone fully autonomous. In a new demo video, we get to see the technology in action during some very dramatic chases through a sunlit forest.
The problem itself is fairly simple: Small-scale UAVs like the ones many amateurs and tinkerers own aren't designed to autonomously avoid obstacles, because they can't carry the weight of the processors they'd need to analyse the world around them and react to it. A drone's camera might record hundreds of frames per second, and estimating the distance to every object in each frame takes some serious firepower. So instead, human pilots on the ground -- who rely on sight, camera feeds, or software -- steer, and frequently wreck, their steeds.
Barry and his collaborator, Professor Russ Tedrake, built an algorithm that takes a different approach. Rather than try to analyse every object in every frame captured by a drone's camera, they set a threshold distance -- 10 metres -- and only analyse objects at that range. Using two onboard stereo cameras recording at 120fps, their software looks only for objects that are 10 metres away. "As you fly, you push that 10-meter horizon forward, and, as long as your first 10 meters are clear, you can build a full map of the world around you," Barry writes on CSAIL's website.
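The core trick is checking the stereo pair at just one fixed disparity instead of building a full depth map: if a patch in the left image matches the patch shifted by that disparity in the right image, something is sitting at the corresponding depth. Here's a minimal sketch of that idea -- the function name, patch size, and thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np

def single_disparity_obstacles(left, right, d=16, patch=5, thresh=10.0):
    """Flag pixels whose left/right patches match at ONE fixed disparity d.

    A match at disparity d implies a surface at one known depth (the
    "10-metre horizon" for the drone's camera baseline), so no full
    depth map is ever computed. Illustrative sketch, not the paper's code.
    """
    h, w = left.shape
    half = patch // 2
    hits = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half + d, w - half):
            a = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            b = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(float)
            # Small mean absolute difference => patches agree => obstacle
            # at the fixed depth that disparity d corresponds to.
            if np.mean(np.abs(a - b)) < thresh:
                hits[y, x] = True
    return hits
```

Because only one disparity is tested per pixel, the work per frame shrinks by roughly the number of disparities a conventional stereo matcher would sweep.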
Everything closer, it ignores. Everything further, it ignores. And because it skips all those extra frames and objects, the drone can keep up even while flying at 48km/h. "While this might seem limiting, our cameras are on a moving platform (in this case, an aircraft), so we can quickly recover the missing depth information by integrating our odometry and previous single-disparity results," they write in a paper published on arXiv.
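That odometry integration can be pictured as a running list of obstacle points: anything detected at the 10-metre horizon is remembered, and as the drone flies, those remembered points are shifted by however far the drone has moved, so closer obstacles are still known without ever re-detecting them. A hypothetical sketch of that bookkeeping (translation-only ego-motion, made-up frame rate maths; a real system would also apply rotation):

```python
import numpy as np

def integrate_detections(world_pts, new_pts, delta_xyz):
    """Carry past single-disparity hits forward using odometry.

    world_pts: (N, 3) obstacle points in the drone's current frame.
    new_pts:   points just detected at the fixed-depth horizon.
    delta_xyz: how far the drone moved since the last frame.
    Illustrative sketch of the idea, not the paper's implementation.
    """
    # Shift old points: as the drone advances, obstacles drift toward it.
    world_pts = np.asarray(world_pts, float).reshape(-1, 3) - delta_xyz
    new_pts = np.asarray(new_pts, float).reshape(-1, 3)
    return np.vstack([world_pts, new_pts])
```

At 48km/h (about 13.3m/s) and 120 frames per second, the drone covers roughly 11cm per frame, so a tree first spotted 10 metres out stays in the map for around 90 frames of reaction time.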
This selective approach cuts out a huge amount of processing, so the software can run on a small, lightweight mobile CPU carried onboard. It could enable, they say, "a new class of autonomous UAVs" that can fly through complex environments without any help from the ground. The great thing about this research? They've put the algorithm online for anyone to try. You can get it here.