The Google Pixel 6 Pro's camera bar has, from left to right, a 25mm wide-angle main camera, a 16mm ultrawide, a 104mm telephoto and a flash. Stephen Shankland/CNET

Inside the Google Pixel 6 cameras' bigger AI brains and upgraded hardware

Exclusive: Google's new phone uses dozens of artificial intelligence tricks to try to capture the best photos and videos.

by Stephen Shankland · CNET

During this week's Pixel 6 launch event, Google demonstrated a handful of AI-powered photography tricks built into its new phones. Among the capabilities: erasing photobombers from backgrounds, taking the blur out of smudged faces and handling darker skin tones more accurately. Those features, however, are just the tip of an artificial intelligence iceberg designed to produce the best imagery for phone owners.

The $599 Pixel 6 and $899 Pixel 6 Pro employ machine learning, or ML, a type of artificial intelligence, in dozens of ways when you snap a photo. The features may not be as snazzy as face unblur, but they show up in every photo and video. They're workhorses that touch everything from focus to exposure.

The Pixel 6 cameras are AI engines as much as imaging hardware. AI is so pervasive that Pixel product manager Isaac Reynolds, during an exclusive interview about the inner workings of the Pixel cameras, had to pause when describing all the ways it's used.

"It's a hard list because there's like 50 things," Reynolds said. "It's actually easier to describe what's not learning based."

All the AI smarts are possible because of Google's new Tensor processor. Google designed the chip itself, combining a variety of CPU cores from Arm with its own AI acceleration hardware. Plenty of other chip designs accelerate AI, but Google paired its AI experts with its chip engineers to build exactly what it needed.

Camera bumps become a mark of pride

With photos and videos so central to our digital lives, cameras have become a critical smartphone component. A few years ago, phone designers strove for the sleekest possible profiles, but today, big cameras signal to consumers that a phone has high-end hardware. That's why flagship phones these days proudly display their big camera bumps or, in the Pixel 6's case, a long camera bar crossing the back of the phone.

AI is invisible from the outside but no less important than the camera hardware itself. The technology has leaped over the limits of traditional programming. For decades, programming has been an exercise in if-this-then-that determinism: for example, if the user has enabled dark mode, then give the website white text on a black background.

With AI, data scientists train a machine learning model on an immense collection of real-world data, and the system learns rules and patterns on its own. For converting speech to text, an AI model learns from countless hours of audio data accompanied by the corresponding text. That lets AI handle real-world complexity that's very difficult with traditional programming.
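
To make that contrast concrete, here's a toy sketch in Python, not Google's code: the first function is a hand-written if-this-then-that rule like the dark mode example, while the second "learns" a day-versus-night brightness threshold from a handful of labeled samples.

```python
# Toy contrast between a hand-written rule and a learned one.
# Hypothetical example for illustration; not Google's code.

# Traditional programming: the rule is written by hand.
def theme_css(dark_mode_enabled: bool) -> str:
    if dark_mode_enabled:  # if-this-then-that determinism
        return "color: white; background: black;"
    return "color: black; background: white;"

# Machine learning: the rule is inferred from labeled examples.
# Here we "train" a brightness threshold separating night from day.
samples = [(0.05, "night"), (0.12, "night"), (0.60, "day"), (0.81, "day")]

def train_threshold(data):
    nights = [b for b, label in data if label == "night"]
    days = [b for b, label in data if label == "day"]
    return (max(nights) + min(days)) / 2  # split point learned from the data

threshold = train_threshold(samples)  # 0.36 for the samples above

def classify(brightness: float) -> str:
    return "day" if brightness > threshold else "night"

print(theme_css(True))  # white-on-black CSS
print(classify(0.3))    # "night" -- 0.3 falls below the learned 0.36 split
```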

The AI technology is a new direction for Google's years-long work in computational photography, the marriage of digital camera data with computer processing for improvements like noise reduction, portrait modes and high dynamic range scenes.

Google's AI camera

Among the ways the Pixel 6 uses AI:

  • With a model called FaceSSD, AI recognizes faces in a scene to set focus and brightness, among other uses. The phones also geometrically reshape faces so people don't get the oblong heads common when they're positioned near the edges of wide-angle shots.
  • For video, the Pixel 6 uses AI to track subjects.
  • AI stabilizes videos to counteract camera movement from shaky hands.
  • When taking a photo, AI segments a scene into different areas for individual editing changes. For example, it recognizes skies for proper exposure and noise reduction to eliminate distracting speckles (see the sketch after this list).
  • The Pixel 6 recognizes who you've photographed before to improve future shots of those people.
  • Google's HDRnet technology applies the company's AI-based processing for still photos to video frames, improving attributes such as exposure and color.
  • For Night Sight shots, a low-light technique Google pioneered that can even photograph the stars, hardware-accelerated AI judges the best balance between long exposure times and sharpness.

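As a rough illustration of the segmentation idea above, the Python sketch below, a toy stand-in rather than anything resembling Google's pipeline, masks off bright, blue-ish pixels as "sky" and applies noise reduction and a small exposure adjustment to that region only. Real phones rely on learned segmentation models, not a color heuristic like this.

```python
import numpy as np

def sky_mask(img: np.ndarray) -> np.ndarray:
    """Crude stand-in for an ML segmenter: call bright, blue-ish pixels 'sky'."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (b > 0.5) & (b > r) & (b > g)

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 box blur as a stand-in for real noise reduction."""
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def edit_by_region(img: np.ndarray) -> np.ndarray:
    mask = sky_mask(img)              # which pixels belong to the sky
    smoothed = box_blur(img)
    out = img.copy()
    out[mask] = smoothed[mask] * 0.9  # denoise and pull exposure down, sky only
    return np.clip(out, 0.0, 1.0)

frame = np.random.rand(480, 640, 3).astype(np.float32)  # stand-in for a photo
print(edit_by_region(frame).shape)                       # (480, 640, 3)
```
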
Each year, Google expands its AI uses. Earlier examples include a portrait mode to blur backgrounds and Super Res Zoom to magnify distant subjects.

On top of that are the snazzier new AI-powered photo and video features in the Pixel 6: Real Tone to accurately reflect the skin of people of color; Face Unblur to sharpen faces otherwise smeared by motion; Motion Mode to add blur to moving elements of a scene like trains or waterfalls; and Magic Eraser to wipe out distracting elements of a scene.

Better camera hardware, too

To make the most of the debut of its first Tensor phone processor, Google also invested in upgraded camera hardware. That should produce better raw image quality, the foundation for all the processing that happens afterward.

The Pixel 6 and Pixel 6 Pro have the same main wide-angle cameras and ultrawide cameras. The Pro adds a 4x telephoto lens. In traditional camera lens terms, they have focal length equivalents of 16mm, 25mm and 104mm, said Alex Schiffhauer, the Pixel product manager who oversees hardware. That's 0.7x, 1x and 4x in the camera interface.
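
Those interface labels follow directly from the focal lengths: dividing each by the main camera's 25mm gives roughly 0.6x and 4.2x, which Google rounds to 0.7x and 4x. A quick check in Python:

```python
# Zoom factors are focal lengths expressed relative to the 25mm main camera.
MAIN_FOCAL_MM = 25.0

for name, focal_mm in [("ultrawide", 16.0), ("main", 25.0), ("telephoto", 104.0)]:
    print(f"{name}: {focal_mm / MAIN_FOCAL_MM:.1f}x")
# ultrawide: 0.6x  (marketed as 0.7x)
# main: 1.0x
# telephoto: 4.2x  (marketed as 4x)
```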

The main camera, the phone's imaging workhorse, has a sensor 2.5 times larger than that of last year's Pixel 5 for better light-gathering abilities. It's a 50-megapixel sensor, but it produces 12-megapixel images because Google combines 2x2 pixel groups into a single effective pixel to cut noise and improve color and dynamic range.
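
The binning step itself is simple arithmetic. Here's a minimal sketch, assuming a float-valued sensor readout, of how averaging each 2x2 block turns a roughly 50-megapixel capture into a roughly 12-megapixel image; real binning happens on the sensor and works per color channel of its filter pattern, but the noise-averaging principle is the same.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one output pixel."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

sensor = np.random.rand(8192, 6144).astype(np.float32)  # ~50MP stand-in readout
image = bin_2x2(sensor)                                  # ~12.6MP result
print(f"{sensor.size / 1e6:.1f}MP -> {image.size / 1e6:.1f}MP")
# Averaging four samples per output pixel roughly halves the noise.
```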

The ultrawide camera has an improved lens compared with last year's, Schiffhauer said. It doesn't have as wide a field of view as the iPhone's 13mm ultrawide camera, but Google wanted to avoid the peculiar perspective distortion that becomes more apparent as lens focal length shortens. Of course, some photographers like that novelty, but for Google, "We want things to look natural when you photograph them," Schiffhauer said.

The most unusual camera is the Pixel 6 Pro's telephoto, a "periscope" design that squeezes a relatively long focal length into a thin phone. In most phone cameras, the image sensor lies flat within the camera body, but the 4x lens uses a prism to redirect light 90 degrees so it travels sideways through the phone, accommodating the relatively long light path telephoto lenses need.

The 4x camera is bulky, but it's actually the unusually large main camera sensor that accounts for the thickness of the Pixel 6's camera bar, Schiffhauer said.

Hardware, software and AI

Google's traditional software also gets an upgrade in the new Pixel phones.

For example, Super Res Zoom, which can use both AI and more traditional computational techniques to zoom digitally into photos, has been improved this year. In previous years, the technology gathered a little more color detail about the scene by comparing differences between multiple frames, but now it also varies exposure across those frames for better detail, Schiffhauer said.
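
The multi-frame principle is easy to demonstrate. In the toy Python sketch below (not Google's algorithm), merging several noisy captures of the same scene averages out the noise, leaving more genuine detail to enlarge; the real technique also exploits the sub-pixel shifts between handheld frames and, per this year's upgrade, varies exposure across them.

```python
import numpy as np

# Toy demonstration: merging frames averages away noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))  # the "true" detail

# Eight noisy captures of the same scene.
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(8)]

single_error = np.abs(frames[0] - scene).mean()
merged_error = np.abs(np.mean(frames, axis=0) - scene).mean()
print(f"one frame: {single_error:.4f}, merged: {merged_error:.4f}")
# Merging 8 frames cuts noise by roughly sqrt(8), about 2.8x.
```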

The fundamental process of taking a typical photo on a Pixel hasn't changed. With a technique called HDR Plus, the camera captures up to 12 frames, most of them significantly underexposed, then blends them together. That lets a relatively small image sensor overcome shortcomings in dynamic range and noise, so you can take a photo of a person in front of a bright sky.

It's a good enough technique that Apple adopted it for its iPhones. But Google also blends in up to three longer exposure frames for better shadow detail.
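
Here's a heavily simplified Python sketch of that idea, illustrative only: many short exposures are averaged for clean, unclipped highlights, and a few long exposures fill in the shadows wherever they didn't blow out. The real merge aligns frames and weights pixels far more carefully.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 4.0, size=(100, 100))  # scene with wide dynamic range

def capture(exposure: float) -> np.ndarray:
    """Simulate one noisy frame; the sensor clips at 1.0."""
    noisy = scene * exposure + rng.normal(0.0, 0.03, scene.shape)
    return np.clip(noisy, 0.0, 1.0)

shorts = [capture(0.2) for _ in range(12)]  # up to 12 underexposed frames
longs = [capture(1.0) for _ in range(3)]    # up to 3 longer exposures

highlights = np.mean(shorts, axis=0) / 0.2  # clean, never clipped
long_avg = np.mean(longs, axis=0)
shadows = long_avg / 1.0                    # clean shadows, clipped sky

# Trust the long exposures only where they didn't clip.
merged = np.where(long_avg < 0.99, shadows, highlights)
print(f"mean error vs. scene: {np.abs(merged - scene).mean():.4f}")
```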

With HDR Plus, AI and upgraded camera modules, the phones embody Google's three-pronged device strategy, Schiffhauer said, "to innovate at that intersection of hardware, software and machine learning."