Across all three generations, the cameras on Google’s Pixel are extraordinary in their simplicity. You don’t get much in the way of manual controls, and even as competitors like Huawei and others have added more and more sensors to the backs of their phones, the Pixel 3 and 3a have held firm with just a single rear camera.
On top of that, if you check out the specs for the Pixel 3’s camera, like its 12-MP resolution and f/1.8 aperture, those figures don’t exactly stand out compared to specs on other phones—there’s no 48-MP sensor or f/1.5 aperture here. And yet, when it comes to the kind of photos a Pixel can produce, the image quality you get from Google’s latest phones is often unmatched.
This gap between the Pixel’s specs and the results it puts out stands in opposition to traditional smartphone camera development, which typically sees device makers cramming bigger lenses and more sensors into their gadgets. So to find out more about Google’s innovative approach to making your cat photos (and everything else) look better, I spoke to Marc Levoy, a distinguished engineer at Google, and Isaac Reynolds, a product manager for the Pixel camera team, who are two of the leaders driving the development of Google’s photography efforts. You can watch highlights from my interview in the video below.
So what’s the other part of the formula for capturing high-quality pictures? Software, driven largely through techniques collectively known as computational photography. Levoy was quick to point out that the field of computational photography is much bigger than just what Google is doing, but in short, it amounts to using software and computers to manipulate a photo (or, more often, a series of photos) to create a final image that looks significantly better than the originals.
This is the principle behind the Pixel’s HDR+ camera mode, which takes multiple photos at different exposures and then combines them to preserve shadows and details better, while also enhancing things like resolution and high dynamic range. The use of computational photography even helps define “the look” of photos shot by a Pixel phone, because unlike other smartphone cameras, Levoy claims that the Pixel camera will rarely blow out highlights.
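To make the multi-frame idea concrete, here is a minimal toy sketch of burst merging in Python with NumPy. This is not Google’s HDR+ pipeline (the real system aligns raw frames and uses far more sophisticated tone mapping); the synthetic scene, noise level, and gamma curve below are all illustrative assumptions. The point is simply that averaging several short, noisy exposures suppresses noise while short exposures keep highlights from clipping, after which a tone curve can lift the shadows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a horizontal gradient from dark shadow to bright highlight.
scene = np.linspace(0.1, 0.9, 64).reshape(1, -1).repeat(64, axis=0)

def capture_burst(scene, n_frames=8, noise=0.05):
    """Simulate a burst of short, noisy exposures.

    Short exposures protect highlights from blowing out, at the cost
    of noisy shadows -- the trade-off multi-frame merging addresses.
    """
    return [np.clip(scene + rng.normal(0.0, noise, scene.shape), 0.0, 1.0)
            for _ in range(n_frames)]

def merge_and_tonemap(frames, gamma=0.7):
    """Average the frames, then brighten shadows with a simple tone curve."""
    merged = np.mean(frames, axis=0)   # noise averages out across frames
    return merged ** gamma             # gamma < 1 lifts shadows, keeps highlights

burst = capture_burst(scene)
result = merge_and_tonemap(burst)

# Merging N frames cuts per-pixel noise by roughly sqrt(N).
single_noise = float(np.std(burst[0] - scene))
merged_noise = float(np.std(np.mean(burst, axis=0) - scene))
print(f"noise reduction: {single_noise / merged_noise:.1f}x")
```

The sketch also illustrates why the Pixel’s look skews darker: merging lets the software recover shadows in post, so the capture itself can prioritize protecting highlights.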
Sometimes, that means a Pixel photo might look underexposed. In scenes like the one above, the Galaxy S10’s shot is generally brighter and arguably more pleasing to the eye, but it lacks a lot of detail in the sunset, which, for me, was the whole reason I snapped the pic in the first place.
Better-looking photos aren’t the only benefit of Google’s software-first approach to photography. It also makes the Pixel’s camera app easier to use. That’s because as powerful as Google’s software is, it’s not a big help if it’s so complicated no one can use it.
Levoy explained that this balance creates a sort of creative tension, where after demoing a potential new feature to the Pixel team, the challenge becomes how to build it into the camera’s functionality so that a user doesn’t need to think about it to get results.
Night Sight is an excellent example of this because once you turn it on, there are no other settings you need to mess with. You just enable Night Sight and tap the shutter button. That’s it. Meanwhile, in the background, the Pixel will evaluate the amount of available light and use machine learning to measure how steady your hands are. This information is then used to determine how low to set the camera’s shutter speed, how many frames the camera needs to capture, and other settings to create the best possible image.
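The behind-the-scenes decision described above can be sketched as a small planning function. To be clear, this is a hypothetical illustration: the function name `plan_capture`, the lux and shake thresholds, and every numeric value are made up for the sake of the example, not Google’s actual logic. It only shows the shape of the trade-off: steadier hands permit longer individual exposures, while darker scenes demand more total light, which the camera makes up with more frames to merge.

```python
def plan_capture(scene_lux: float, shake_score: float):
    """Return (per_frame_exposure_s, n_frames) for a low-light shot.

    scene_lux:   ambient brightness; lower means darker.
    shake_score: 0.0 = tripod-steady, 1.0 = very shaky hands.
    All thresholds below are illustrative assumptions.
    """
    # Steadier hands allow longer single exposures without motion blur.
    max_exposure = 1.0 / 3.0 if shake_score < 0.3 else 1.0 / 15.0

    # Darker scenes need more total exposure time overall.
    target_total = 2.0 if scene_lux < 1.0 else 0.5

    # Cap the per-frame exposure, then make up the difference
    # with more frames for the merge step (bounded to a sane burst size).
    per_frame = min(max_exposure, target_total)
    n_frames = max(1, min(15, round(target_total / per_frame)))
    return per_frame, n_frames

# Handheld in deep darkness: many short frames.
print(plan_capture(scene_lux=0.5, shake_score=0.8))
# Braced on a steady surface: fewer, longer frames.
print(plan_capture(scene_lux=0.5, shake_score=0.1))
```

The design choice this mirrors is the one Levoy describes: all of this branching happens out of sight, so the user’s interaction stays at a single tap.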
This streamlined approach to photography has its trade-offs, especially if you’re used to the traditional controls you might find in a DSLR or fancy mirrorless camera. Unlike camera apps on other phones, the Pixel doesn’t offer manual controls for setting things like shutter speed, exposure compensation, or ISO. This balance between high-quality results and user control is something the Pixel camera team constantly struggles with.
In the end, Reynolds summed it up by saying “If you could build a user interface that perfectly took that complexity—those three tap processes—and put them where they wouldn’t affect the one-tap user, absolutely. That sounds fantastic. But it’s impossible to actually hide those things way down under the hood like that. If you try to add a use case that takes three taps, you’re going to compromise the one tap.” This is why when push comes to shove, Google always comes back to its one-tap mantra.
As a counterpoint, Reynolds pointed out that while other phones come with pro modes that allow people to tweak camera controls, typically, as soon as you switch out of auto and into manual, you lose a lot of the extra processing and AI-assisted photo enhancements companies like Huawei and Samsung have been adding to their handsets. The results of more control frequently aren’t better than leaving it all to the computer.
But perhaps the most significant advantage of computational photography is for the average person who only buys a new phone every two or three years. Since much of the magic inside a Pixel’s camera rests in software, it’s much easier to port features like Night Sight and Super Res Zoom, which made their debut on the Pixel 3, to older devices, including both the Pixel 2 and the original Pixel.
This also comes into play on lower-priced devices like the $US400 ($574) Pixel 3a, because despite costing half as much as a standard Pixel 3, it delivers essentially the same high-end image quality. And in a somewhat surprising move, the newest addition to the Pixel camera—a new hyper-lapse mode—was first announced on the Pixel 3a before making its way to the rest of the Pixel family.
Sadly, when I asked about what might be the next feature heading to the Pixel camera, Levoy and Reynolds were a bit cagey. Personally, as impressive as the Pixel’s camera is, I still often find myself wondering what Google could do if the next Pixel had dual rear cams—perhaps one with an optical zoom. After all, the Pixel 3 does have two cameras in front for capturing standard and ultra-wide angle shots. I guess we’ll have to wait and see.