• Friday, April 26, 2019

Differences between RGB and monochrome sensors

Monochrome, black-and-white camera sensors can capture considerably more detail than standard RGB sensors. To understand why, however, we'll have to take a closer look at how camera sensors work. This blog post explains the main differences between these two sensor types and how each of them influences the final photo.

Inside every camera sensor there are many pixels, and each individual pixel contributes to light gathering, which means each one has an impact on the final photo. Pixels are arranged in a grid and pick up light much like an array of buckets would collect rainwater. During an exposure, each pixel gathers light, and this value is then quantized and stored in memory.
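To make this concrete, here is a minimal Python sketch of the bucket analogy. Everything in it (sensor size, photon counts, 10-bit quantization) is an illustrative assumption, not any real camera's readout pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny toy sensor: a grid of "buckets" that count photons.
sensor_height, sensor_width = 4, 6
incoming_light = rng.uniform(0, 1000, size=(sensor_height, sensor_width))

# During the exposure each pixel integrates light; sampling photon
# counts from a Poisson distribution also models shot noise.
photon_counts = rng.poisson(incoming_light)

# The collected charge is quantized by an ADC, here to 10-bit values.
adc_max = 2**10 - 1
raw_values = np.clip(photon_counts, 0, adc_max).astype(np.uint16)

print(raw_values)  # this raw array is what gets stored in memory
```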

These pixels only record the amount of light, not its color, so in order to capture color, a completely new system had to be developed.

In a color-capturing sensor, each pixel can capture only one color: red, green, or blue. The pixels are arranged so that no two pixels of the same color sit directly next to each other. This arrangement is called a Color Filter Array, or CFA. The best-known and most widespread CFA is the Bayer pattern, in which pixels are laid out in alternating red-green and green-blue rows, as the sketch below reproduces.
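A small Python sketch of this RGGB layout; the function name bayer_pattern is my own, and this is a minimal illustration rather than any library's API:

```python
import numpy as np

def bayer_pattern(height, width):
    """Return a color label ('R', 'G', or 'B') for each pixel (RGGB layout)."""
    pattern = np.empty((height, width), dtype="<U1")
    pattern[0::2, 0::2] = "R"   # red on even rows, even columns
    pattern[0::2, 1::2] = "G"   # green next to every red
    pattern[1::2, 0::2] = "G"   # green below every red
    pattern[1::2, 1::2] = "B"   # blue on odd rows, odd columns
    return pattern

print(bayer_pattern(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
```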

However, precisely because we want each pixel to capture only one specific color, each pixel effectively registers only about a third of the light it is exposed to; the other two colors are filtered out. For example, red and green light that falls on a blue pixel is not captured at all.
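To illustrate, here is a sketch that applies an RGGB filter array to a full-color image, so each pixel keeps only the single channel its filter passes; the function name mosaic is a hypothetical one for this illustration:

```python
import numpy as np

def mosaic(rgb):
    """Simulate a Bayer (RGGB) capture: rgb is (H, W, 3), the result is
    (H, W), with two thirds of the information discarded at each pixel."""
    h, w, _ = rgb.shape
    out = np.zeros((h, w), dtype=rgb.dtype)
    out[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filter sites keep only red
    out[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filter sites keep only green
    out[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filter sites keep only green
    out[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filter sites keep only blue
    return out
```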

It is also important to mention that each pixel captures one color directly, while the other two colors have to be inferred from the surrounding pixels. This process, known as demosaicing, reconstructs colors quite accurately, especially since Bayer technology has now been in development for more than ten years.
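As a rough illustration of the inference step, here is a minimal bilinear demosaicing sketch, assuming the RGGB mosaic from above; real demosaicing algorithms are considerably more sophisticated, and the function names here are mine:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(bayer):
    """bayer: (H, W) RGGB mosaic -> (H, W, 3) color estimate."""
    h, w = bayer.shape
    # Masks marking which pixels actually measured each color.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    def interpolate(mask):
        # Normalized convolution: missing values become weighted
        # averages of the nearest pixels that did measure this color.
        kernel = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
        values = convolve(bayer * mask, kernel, mode="mirror")
        weights = convolve(mask, kernel, mode="mirror")
        return values / weights

    return np.dstack([interpolate(m) for m in (r_mask, g_mask, b_mask)])
```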

In a monochrome sensor, as opposed to an RGB one, all pixels capture light equally. No colors get filtered out, which is why every pixel captures 100% of the light that falls onto it (compared to roughly one third for a color pixel).

These pixels also skip the color-inference step entirely, because the light value captured by each individual pixel is stored directly in memory. This is why monochrome sensors produce photos with higher effective resolution.
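A back-of-the-envelope comparison makes the difference tangible. This sketch assumes a toy scene with equal energy in all three channels, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, size=(100, 100, 3))  # toy scene, flat spectrum

# Monochrome pixel: no color filter, so it integrates all three components.
mono_per_pixel = scene.sum(axis=2).mean()

# Bayer pixel: the color filter passes only one of the three components.
bayer = np.zeros(scene.shape[:2])
bayer[0::2, 0::2] = scene[0::2, 0::2, 0]   # red sites
bayer[0::2, 1::2] = scene[0::2, 1::2, 1]   # green sites
bayer[1::2, 0::2] = scene[1::2, 0::2, 1]   # green sites
bayer[1::2, 1::2] = scene[1::2, 1::2, 2]   # blue sites
bayer_per_pixel = bayer.mean()

print(f"monochrome: {mono_per_pixel:.2f}, Bayer: {bayer_per_pixel:.2f}")
# For this flat-spectrum scene the Bayer pixel records roughly a third.
```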

Not only do these two sensor types differ in how they are built, but also in the results they ultimately produce. Monochrome sensors have the advantage of producing images with lower noise levels and higher resolution: each pixel collects roughly three times as much light, so the signal stands further above the sensor's noise floor.

On the other hand, the choice between a monochrome and an RGB sensor is not always easy, because the RGB sensor has its own merits. True, its effective resolution is lower, but it offers far more editing options in post-production. At the end of the day, it is easy to reduce a color image to black and white, whereas going the other way is effectively impossible.
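The easy direction really is a one-line weighted sum; here is a minimal sketch using the standard Rec. 601 luma weights (the function name is mine):

```python
import numpy as np

def to_grayscale(rgb):
    """rgb: (H, W, 3) array in [0, 1] -> (H, W) black-and-white image."""
    weights = np.array([0.299, 0.587, 0.114])  # Rec. 601 luma coefficients
    return rgb @ weights
```

The reverse mapping has no unique solution: infinitely many color images collapse to the same grayscale values, which is why true color cannot be recovered from a single monochrome channel.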

All this said, it's easy to see why smartphone manufacturers choose to put both of these sensors in their phones' camera setups. The AI algorithms driving the cameras are advanced enough to recognize which details to take from which camera in order to produce very detailed, high-resolution color photos.
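One common fusion idea, sketched below under my own naming and without the alignment and learned components a real phone pipeline would add, is to take fine detail (luminance) from the monochrome camera and color (chrominance) from the RGB camera:

```python
import numpy as np

def fuse(mono, rgb):
    """mono: (H, W) detail image; rgb: (H, W, 3) aligned color image.
    Rescales each color channel so its brightness follows the mono image."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])  # RGB camera's own luminance
    luma = np.clip(luma, 1e-6, None)              # avoid division by zero
    return rgb * (mono / luma)[..., np.newaxis]
```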