Deep learning is one of computer science's most intriguing disciplines. Essentially, it involves building computer systems that can make reasoned decisions based on prior experience with training data sets: in short, a computer that can "think" for itself. But how do you build a machine learning system that actually works? This presentation slide attempts to map out the entire process.
Professor Andrew Ng is chief scientist at web services company Baidu and one of the brains behind Deep Image, the most accurate computer vision system in the world. At this year's Nvidia GPU Technology Conference, Ng gave a talk on the principles of deep learning, including a layman's guide to building new systems that work.
The above slide provides the basic recipe for successful machine learning (start at the top left and follow the arrows to complete the steps). Ng explained the process as follows:
When I'm building a machine learning system, the first thing I ask is "does it do well on the training data?" If it doesn't, then I would build a bigger network, or "rocket engine", so you have more neurons, more weights to try and fit the training data well. Once you fit the training data well, you see if it fits the test data or development data. If it doesn't do well on the test data but you're doing well on the training data, that means you're overfitting. The most reliable cure for overfitting is to get more data, to get more rocket fuel. And then you keep going around and around and around until eventually it does well in the training data, it does well in the test data and then hopefully you're done.
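The decision loop Ng describes can be sketched in a few lines of code. This is a minimal illustration, not his actual workflow: the error threshold and the `bigger_network` / `more_data` helpers are hypothetical placeholders standing in for real capacity increases and data collection.

```python
# A sketch of the "basic recipe" decision loop. All names and
# thresholds here are illustrative placeholders, not a real API.

def bigger_network(model_size):
    """Placeholder: add more neurons/weights (a bigger "rocket engine")."""
    return model_size + 1

def more_data(data_size):
    """Placeholder: collect more training examples (more "rocket fuel")."""
    return data_size + 1

def basic_recipe(train_err, dev_err, target=0.05):
    """Loop until the system does well on both training and test/dev data.

    train_err, dev_err: functions of (model_size, data_size) -> error rate.
    """
    model_size, data_size = 0, 0
    while True:
        if train_err(model_size, data_size) > target:
            # Poor fit on training data: underfitting, so grow the network.
            model_size = bigger_network(model_size)
        elif dev_err(model_size, data_size) > target:
            # Good on training data but poor on dev data: overfitting,
            # so get more data.
            data_size = more_data(data_size)
        else:
            # Does well on both: hopefully you're done.
            return model_size, data_size
```

With toy error curves that shrink as capacity and data grow, the loop terminates once both errors drop below the target, mirroring the arrows on the slide.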
Ng cautioned that this is a highly simplified explanation of what his job entails, and that computer scientists often still run into problems even after following these steps. At that point, you need to modify the network architecture, or cast some black magic.
You can watch Ng's full keynote address below: