Even the most expensive smartphones can’t match the image quality of the priciest digital cameras, but with a beefy processor at their disposal, smartphones let you do so much more with the photos you snap. The iPhone’s Portrait Mode is a good example, letting you tweak the lighting of a photo after you snap it, but it’s a feature limited to Apple’s high-end handsets. Researchers from Google and the University of California, San Diego have found a way to recreate this feature, with even better results, on even the most basic camera-equipped phones.
Apple’s Portrait Mode uses the smartphone’s multiple rear cameras to take several photos of the same scene at once, which are then compared against each other in software to automatically generate a depth map of the image. In layman’s terms, a depth map is a grayscale representation of an image in which each pixel’s brightness encodes how far the object at that point in the scene was from the camera.
It allows Portrait Mode to separate foreground subjects — like people and pets — from the background of an image, which can then be blurred to draw the human eye to the most important parts. But it also allows iOS’ Portrait Lighting to distinguish the features on a person’s face so that lighting adjustments, which are completely faked, look as natural and genuine as possible.
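A depth map can be pictured as a simple 2D grid of distances, with the subject picked out by a distance threshold. The sketch below is purely illustrative: the array values and the 1.5-metre cutoff are invented for the example, not taken from Apple's or Google's implementations.

```python
# Toy 4x4 "depth map": each value is the distance (in metres) from the
# camera to whatever occupies that pixel. Small values = close to the camera.
depth = [
    [0.9, 1.0, 3.2, 3.1],
    [0.8, 1.1, 3.0, 3.3],
    [1.0, 0.9, 3.4, 3.2],
    [3.1, 3.3, 3.5, 3.0],
]

# Hypothetical threshold: anything nearer than 1.5 m is treated as the
# foreground subject; everything else is background a Portrait-style
# effect would blur.
NEAR_METRES = 1.5
foreground_mask = [[d < NEAR_METRES for d in row] for row in depth]

subject_pixels = sum(v for row in foreground_mask for v in row)
print(subject_pixels)  # → 6
```

A real pipeline works the same way in principle, just per-pixel over millions of entries and with soft edges rather than a hard cutoff.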
As researchers from Google Research and UC San Diego have now shown, a feature like changing the lighting on a photo after you’ve taken it might soon not require an ultra-pricey smartphone with multiple cameras on the back. In a paper being presented at the SIGGRAPH 2019 conference taking place in Los Angeles, California, next week, the researchers detail how a properly trained AI can recreate the same functionality with just basic camera phone hardware, and with arguably better results.
The neural network used for this process was trained on a comparatively small sample group: just 18 individuals were placed in a specially built light stage and photographed from seven different angles while surrounded by a sphere of lights firing from 307 different directions. Those captures yielded a much larger database of human portraits demonstrating hundreds of ways the human face appears under light arriving from different directions, according to the researchers.
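Those numbers imply a sizeable set of raw captures despite the small subject pool. A quick back-of-the-envelope tally, assuming every subject was photographed from every angle under every individual light direction (which the article implies but doesn't state outright):

```python
# Figures quoted in the article.
subjects = 18
camera_angles = 7
light_directions = 307

# Assumed: one photo per (subject, angle, light) combination.
total_photos = subjects * camera_angles * light_directions
print(total_photos)  # → 38682
```

That is how 18 people can translate into tens of thousands of training images: the lighting variation, not the number of faces, does most of the work.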
The demographics of the subjects, which included “7 male Caucasians, 7 male Asians, 2 female Caucasians, 1 female Asian, and 1 female of African descent”, skewed towards those with lighter skin tones, so the researchers manually assembled the training sets to avoid under-representing minorities.
By analysing those captures, the researchers claim, the AI learned how to relight existing photos, recreating almost any lighting condition on a human subject’s face and optionally altering the background to match. For comparison, Apple’s Portrait Lighting feature will offer only six relighting options when iOS 13 arrives in the autumn.
This new approach could theoretically let a user reposition the light source anywhere they want in 3D space, with the appropriate results applied to the image, including accurate shadows and colours on a subject’s face.
The researchers boast the technique can generate a 640 x 640-pixel image in just 160 milliseconds. That’s fast enough for generating real-time, scaled-down previews on a smartphone’s screen. But processing a full 12-megapixel image snapped by a modern smartphone equates to about 4.7 seconds of number crunching. Not exactly real-time, and the research team doesn’t detail what kind of processor was used to achieve those results.
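The 4.7-second figure follows directly from the quoted benchmark, under the naive assumption that processing time scales linearly with pixel count:

```python
# Benchmark quoted by the researchers.
tile_pixels = 640 * 640   # 409,600 pixels
tile_ms = 160             # milliseconds per 640 x 640 image

# A typical modern smartphone photo (assumed figure).
full_pixels = 12_000_000  # ~12 megapixels

# Naive linear scaling: time grows in proportion to pixel count.
estimated_ms = tile_ms * full_pixels / tile_pixels
print(round(estimated_ms / 1000, 1))  # → 4.7
```

Real-world throughput would depend on memory bandwidth and how the network tiles or downsamples the input, so treat this as a ballpark estimate rather than a measurement.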
It could have been a desktop workstation. But mobile processors reliably see a performance boost every year, and if this technology is ever implemented in a mobile OS like Android, the processing-heavy work could always be offloaded to the cloud and a powerful server somewhere, making processing times feel insignificant.