The Google Pixel 2 has the best camera of any smartphone. This much we already know to be true. It also has an excellent portrait mode for blurring the background in photos, but it doesn’t use two camera lenses to do that like the iPhone or Samsung Galaxy Note8. Instead, it uses some very smart software and machine learning, plus the minute differences between the subpixels in its camera sensor, to simulate blur that looks very nearly as good as that from a much larger and more expensive digital SLR.
In a new research blog post, Google engineers Marc Levoy and Yael Pritch detail the techniques Google has been combining to enable the Pixel 2 to capture photos with smoothly blurred backgrounds. There are two methods that, when combined, give an amazingly effective result despite not using two separate camera lenses and sensors in offset positions.
The first (the first panel above) is a basic foreground-background segmentation: a neural network, trained on Google’s massive library of faces (nearly a million, some wearing hats and sunglasses and holding things), determines which pixels belong to the foreground and which to the background. The resulting mask is “not too bad”, say the engineers, but it can be better.
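As a rough illustration of what happens once such a mask exists (this is not Google’s actual pipeline; the real network and blur are far more sophisticated), the compositing step amounts to keeping masked foreground pixels sharp while substituting blurred values everywhere else. A hypothetical numpy sketch, using a naive box blur as a stand-in:

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur: sliding-window mean over a (2r+1)^2 neighbourhood."""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fake_portrait(img, mask, radius=1):
    """Keep pixels where mask is True sharp; replace the rest with blur.
    Illustrative only -- real portrait modes blur by depth, not uniformly."""
    blurred = box_blur(img, radius)
    return np.where(mask, img, blurred)
```

A segmentation mask alone can only blur everything outside the subject by the same amount, which is exactly the limitation the second technique addresses.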
The second technique (the second panel) is more intricate. It uses a stereo algorithm, like the dual-lens camera setups on the iPhone 7/8 Plus and Samsung Galaxy Note8 do, but with a single lens. It does this by comparing the slightly different views seen by the dual phase-detection subpixels within each pixel on the sensor; their viewpoints are barely a millimetre apart on the Pixel’s tiny sensor, but that’s enough to distinguish the background from the foreground. That data is then used to create a depth map.
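The underlying idea is classic stereo matching: for each pixel, find the small horizontal shift that best aligns the two views, and the size of that shift indicates depth. A toy one-scanline sketch, with hypothetical function and parameter names and none of the refinements Google applies to full dual-pixel images:

```python
import numpy as np

def disparity_1d(left, right, max_shift=3, win=2):
    """Block-match one scanline: for each pixel, find the horizontal shift
    of `right` that best matches `left` over a small window (lowest SSD).
    Larger disparity = closer to the camera. Illustrative sketch only."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(win, n - win):
        best, best_cost = 0, np.inf
        for d in range(-max_shift, max_shift + 1):
            lo, hi = x - win, x + win + 1
            if lo + d < 0 or hi + d > n:
                continue  # shifted window would fall off the image
            cost = np.sum((left[lo:hi] - right[lo + d:hi + d]) ** 2)
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp
```

For example, a bright feature that appears one pixel further right in the second view yields a disparity of 1 around that feature; a real implementation does this densely in 2D with sub-pixel precision and heavy noise handling.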
The two results are then compared and combined, and that’s Google’s “secret sauce”: the proprietary step that decides how much blur to apply to each segment of the image. It uses the software segmentation to decide what’s background, and the depth map to decide how aggressively to blur it, using a translucent disc of the same colour as the pixel that’s being blurred, but of a different size based on its position in the scene.
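The disc idea can be sketched as a scatter-style blur: every pixel is splatted as a translucent disc whose size comes from a per-pixel blur radius, and the overlapping discs are accumulated and normalised. This is a heavily simplified, hypothetical rendition (the disc weighting here is made up), not Google’s renderer:

```python
import numpy as np

def scatter_blur(img, blur_radius):
    """Splat each pixel as a translucent disc of its own colour, sized by
    the per-pixel blur radius (e.g. 0 for foreground, larger with depth),
    then normalise the accumulated colour by the accumulated opacity."""
    h, w = img.shape
    acc = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            r = int(blur_radius[y, x])
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy * dy + dx * dx > r * r:
                        continue  # keep the splat disc-shaped
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        a = 1.0 / (1 + r * r)  # bigger discs are fainter
                        acc[ny, nx] += a * img[y, x]
                        weight[ny, nx] += a
    return acc / np.maximum(weight, 1e-9)
```

With a radius of zero everywhere the image passes through unchanged, which is why sharp foreground pixels survive intact while deeper pixels smear into progressively larger discs.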
Portrait Mode on the Pixel 2 takes four seconds to capture an image, according to Google. Beyond portraits, which use both techniques described above, the blurring can also be applied to close-up objects like flowers between 10cm and a metre from the lens, using the dual-pixel depth mapping alone. The other half of the equation, machine learning and pure software grunt, is used for the Pixel 2’s surprisingly effective selfie portrait blur mode.
Google also has some tips for taking Portrait Mode photos for aspiring snappers:
- Stand close enough to your subjects that their head (or head and shoulders) fills the frame.
- For a group shot where you want everyone sharp, place them at the same distance from the camera.
- For a more pleasing blur, put some distance between your subjects and the background.
- For macro shots, tap to focus to ensure that the object you care about stays sharp.