Modern smartphones come with powerful camera systems, and a lot goes on behind the scenes to make your photos look beautiful. One example of this is pixel binning.
You’ve probably seen Samsung use terms like “nona-binning” or “Adaptive Pixel” in its marketing when referring to pixel binning, claiming it improves low-light performance. But is that really the case? Let’s take a look at what pixel binning is, why it is used, and how it works.
Why Smartphone Cameras Use Pixel Binning
Before learning what pixel binning is and how it works, you need to understand why it exists. See, smartphones have a big problem when it comes to cameras: size limitation. A camera sensor is basically a plate of millions of pixels that captures ambient light, so the more total light-gathering area those pixels cover, the more light the sensor can capture to produce a better image.
When we say “pixel” in this context, we don’t mean the pixels on the screen that emit light, but instead the photosites on the camera sensor that capture light. This light is then converted into electrical signals that are used to produce the image you see on your screen.
Now, here’s the problem: as we keep adding more pixels, we’d also have to make the sensor bigger to fit them in. This is difficult because the camera module on a phone is just one part of its body; you also have to fit the battery, motherboard, speaker, and a plethora of sensors into a smartphone.
To overcome this limitation, technology companies came up with a smart solution. Rather than making the sensor absurdly large, they shrunk the pixels themselves and placed more pixels in a given space to increase the maximum theoretical image resolution.
For reference, the 12MP sensor on the iPhone 13 has a pixel size of 1.9 µm (micrometers), while the 48MP sensor on the iPhone 14 Pro measures 1.22 µm per pixel. And the 108MP sensor on the Galaxy S22 Ultra has pixels just 0.8 µm in size – one of the smallest we’ve seen.
What is pixel binning? How does it work?
Pixel binning is an image processing technique in which four or more adjacent pixels in a camera sensor are combined to form a superpixel (or “tetrapixel” or “nonapixel” as Samsung calls it) that carries the sum or average value of all the pixels in it.
Note that at the hardware level, the pixels do not physically move or blend into each other; it’s simply their photonic data that is combined in software to imitate a larger pixel.
Let’s understand this through an example with the iPhone 14 Pro Max and Galaxy S22 Ultra. The iPhone 14 Pro Max does 4-in-1 pixel binning (2×2 array) to lower the resolution of the image from the original 48MP to 12MP. Likewise, the S22 Ultra does 9-in-1 pixel binning (3×3 array) and drops the resolution from 108MP to 12MP.
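The averaging step at the heart of binning can be sketched in a few lines. This is a simplified, hypothetical illustration: real smartphone sensors bin same-color photosites under a Quad-Bayer or similar color filter layout, while the toy function below ignores color and simply averages grayscale blocks.

```python
import numpy as np

def bin_pixels(sensor: np.ndarray, factor: int) -> np.ndarray:
    """Average each factor x factor block of pixels into one superpixel."""
    h, w = sensor.shape
    # Trim any edge pixels that don't fit into a full block,
    # then group the grid into factor x factor blocks and average them.
    blocks = sensor[: h - h % factor, : w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A toy 4x4 "sensor": 2x2 (4-in-1) binning yields a 2x2 image,
# i.e. a quarter of the original resolution.
sensor = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_pixels(sensor, 2)
print(binned.shape)  # (2, 2)
print(binned[0, 0])  # 2.5 -- the average of pixels 0, 1, 4, 5
```

With `factor=2` this mirrors the iPhone 14 Pro Max’s 4-in-1 binning; with `factor=3` it mirrors the S22 Ultra’s 9-in-1 binning.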
Reducing the resolution in this way allows your phone to process photos faster, so you can view a shot immediately after taking it. Shooting at full resolution, on the other hand, creates a much heavier workload and takes far longer to process.
Also remember that megapixels and megabytes are not the same thing. Megapixels refer to the number of pixels on the sensor (a fixed value), while megabytes refer to the size of the image file (a variable value), which depends on how much information is in your shot.
For example, the Galaxy A53 has a 64MP camera and does 4-in-1 pixel binning to capture 16MP shots. By default, it produces images at a resolution of 4624 x 3468, for a total of 16,036,032 pixels, or just 16MP (one megapixel is one million pixels). If you switch to full-resolution mode, you’ll get images at a resolution of 9248 x 6936, for a total of 64,144,128 pixels, or 64MP.
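The arithmetic in the Galaxy A53 example above can be checked directly:

```python
# Resolution-to-megapixel arithmetic for the Galaxy A53 example.
binned_px = 4624 * 3468  # 16,036,032 pixels, i.e. ~16 MP
full_px = 9248 * 6936    # 64,144,128 pixels, i.e. ~64 MP

# The full-resolution image has exactly 4x the pixels of the
# binned one, matching the 4-in-1 (2x2) binning pattern.
print(full_px // binned_px)  # 4
```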
Pixel binning does not guarantee better photos
Here’s something that might be hard to swallow: pixel binning is a solution to a fake problem. The whole idea behind pixel binning is that you can place more but smaller pixels instead of fewer but larger pixels on a camera sensor. This is not necessary because a larger individual pixel will always capture more raw light.
In comparison, a superpixel of the same size containing the photonic data of several smaller pixels has to guess what the final shot should look like – and it doesn’t always do it well. This is why photos from Samsung phones sometimes look over-processed, while those from iPhones look more natural and consistent.
Tech companies like to brag about how many megapixels their new camera sensor has, which is why the average smartphone user has come to believe that more megapixels means better image quality. It doesn’t. Image quality is determined more by the size of the sensor itself than by the number of pixels on it.
The number of megapixels determines the maximum resolution at which your phone can shoot. The only practical benefit of this is that you can zoom into and crop your photos without them turning blurry. The megapixel count says nothing about color science, white balance, dynamic range, or anything like that.
The alleged benefits of pixel binning are not a result of the technology itself, but of the powerful image processing algorithms and chipset in your phone. It’s the latter that does the hard work of making your shots look brighter, less grainy, and more vibrant.
The reason a lower-resolution, pixel-binned photo can sometimes look better than a full-resolution one is that running image-processing algorithms on a larger photo demands more processing power and time, while a smaller photo can be processed almost instantly.
Pixel binning is a workaround, not a feature
Ultimately, the goal of pixel binning is to increase the maximum theoretical image resolution a smartphone camera can have, while decreasing it enough so that your phone can quickly process your photos for everyday use.
Image resolution is important because of course you want to zoom in on your photos without losing detail, but numbers like 108 MP are honestly not necessary.
The best way to make sure the phone you want to buy has a good camera system is to view camera previews and reviews. Don’t obsess too much about the technical details; if you like what you see, that’s the right camera for you.