The wide-angle camera lenses that have become more prevalent in smartphones are great for capturing shots of large groups of people—assuming you’re not one of the people near the edges or corners of the photo whose face ends up getting unflatteringly warped. Computational photography researchers have come up with a solution that can automatically fix distorted faces, without affecting the rest of the photo.
The bulging glass element on a wide-angle lens allows it to capture light from a wider field of view than a lens that looks comparatively flat. So when you take a selfie with your phone held in an outstretched hand, a group of friends doesn’t have to awkwardly squeeze together to fit into the frame.
The squeezing actually comes afterward, inside the camera, where all of that additional optical information has to be projected onto a flat, rectangular image sensor. The results aren’t quite as severe as staring into a reflective ball (think Escher’s self-portrait), but elements near the edges and corners of a photo’s frame often end up slightly distorted.
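To get a feel for why the edges suffer most, consider the standard rectilinear (perspective) projection most camera lenses approximate, where image radius grows as f·tan(θ) with the angle θ off the lens axis. The local radial stretch then grows as 1/cos²(θ), which is a back-of-the-envelope illustration, not a figure from the researchers’ paper:

```python
import numpy as np

def radial_stretch(theta_deg):
    """Relative radial magnification of a rectilinear lens.

    Image radius is r = f * tan(theta), so a small angular step d(theta)
    covers dr = f / cos(theta)^2 on the sensor -- growing quickly toward
    the edges of the frame compared to the center.
    """
    theta = np.radians(theta_deg)
    return 1.0 / np.cos(theta) ** 2  # normalized to 1.0 at the center

print(radial_stretch(0))   # 1.0 -- no stretch at the center of the frame
print(radial_stretch(45))  # ~2.0 -- content 45 degrees off-axis is stretched ~2x radially
```

That roughly doubled radial stretch at the corners of a very wide shot is exactly the “unflattering warp” faces pick up there.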
Apps like Adobe’s Photoshop can be used to undo or fix unwanted distortion, but they take a certain level of skill to use effectively. The average smartphone user is not a Photoshop master, so researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Google have created an automated algorithm that can be added to any camera app to fix warped faces, without distorting other objects in the same shot.
The warping around the edges of a wide-angle shot is relatively easy to fix, but at the cost of warping everything closer to the middle of the frame instead. Fixing only specific parts of an image is much harder: the researchers had to generate a content-aware grid that represents how detected faces have been warped, which is then used to limit where the de-warping transformations are applied.
At the same time, the algorithm ensures that the de-warped areas blend smoothly into the surrounding, uncorrected parts of the photo, so that objects with straight lines, such as buildings behind a group of people, don’t end up looking like something out of a Dr. Seuss book.
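The idea of a face-weighted correction grid can be sketched roughly: move pixels inside detected face regions toward a shape-preserving stereographic mapping (which keeps faces looking natural), leave the rest of the frame on its original perspective mapping (which keeps lines straight), and blend between the two with a soft mask so the correction fades out smoothly. Everything below, including the function names, the box-based face mask, and the Gaussian falloff, is an illustrative assumption, not the researchers’ actual mesh optimization:

```python
import numpy as np

def radial_targets(r, f):
    """Map a perspective image radius r to its stereographic counterpart.

    Perspective: r = f * tan(theta); stereographic: r' = 2f * tan(theta / 2).
    Stereographic projection preserves local shapes, so faces rendered at
    these radii keep their natural proportions.
    """
    theta = np.arctan(r / f)            # incident angle recovered from the radius
    return 2.0 * f * np.tan(theta / 2.0)

def face_weight(shape, face_boxes, falloff=40.0):
    """Soft mask: 1 inside face boxes, decaying smoothly with distance,
    so the correction blends into the untouched background."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    weight = np.zeros(shape)
    for (x0, y0, x1, y1) in face_boxes:
        dx = np.maximum.reduce([x0 - xs, xs - x1, np.zeros_like(xs)])
        dy = np.maximum.reduce([y0 - ys, ys - y1, np.zeros_like(ys)])
        d = np.hypot(dx, dy)            # distance from each pixel to the box
        weight = np.maximum(weight, np.exp(-(d / falloff) ** 2))
    return weight

def correction_field(shape, face_boxes, f):
    """Per-pixel target coordinates for a forward warp: pixels inside face
    regions move toward their stereographic positions; pixels far from any
    face stay put, keeping buildings and other straight lines straight."""
    h, w = shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)
    scale = np.ones_like(r)
    nz = r > 0
    scale[nz] = radial_targets(r[nz], f) / r[nz]   # radial rescale toward stereographic
    wgt = face_weight(shape, face_boxes)
    blend = 1.0 + wgt * (scale - 1.0)              # weight 0 leaves a pixel alone
    return cx + (xs - cx) * blend, cy + (ys - cy) * blend
```

In practice the paper works on a coarse mesh rather than every pixel, and solves for the grid with an optimization, but the same three ingredients appear: a shape-preserving target projection for faces, the original projection everywhere else, and a smooth weight between them.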
All of the algorithm’s calculations and corrections are fully automatic, so a user would only ever need to tap a single button in a photo-editing app to make a fix. It could even be integrated into a camera app and applied to wide-angle photos on the fly, since the algorithm is fast enough on modern smartphones to deliver almost immediate results.
There’s no word on when a feature like this will officially start appearing on smartphones, but the researchers behind it all currently work for Google, so it wouldn’t be that surprising to see it first appear on a future Pixel phone.