The Future of Image Sensor Technology – Beyond the Bayer CFA

May 17th, 2016

For a decade, the Super 35mm Bayer CFA (Color Filter Array) CMOS sensor has powered the digital cinema revolution. I’d like to present a couple of interesting technologies which may well end up replacing it.

The Bayer CFA has become the de facto standard architecture for single-sensor digital cameras of all types, both stills and video.

Let’s start by explaining what a Bayer CFA is.

The Bayer CFA

An image sensor is made up of a matrix of millions of light-sensitive photosites. A single photosite is sensitive only to luminance: how many photons strike it in a given period to create a charge. Because a single bare photosite is color blind, an array of colored filters has to be arranged on top of the sensor. A Bayer filter mosaic is a color filter array (CFA) that arranges RGB color filters on a square grid of photosensors. This particular arrangement of color filters is used in most single-chip digital image sensors found in digital cameras, camcorders, and scanners to create a color image. The filter pattern is 50% green, 25% red and 25% blue.
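
To make that arrangement concrete, here is a minimal sketch in Python with NumPy (my own illustration, not from any manufacturer's code) that tiles the repeating 2x2 RGGB cell across a sensor grid and confirms the 50/25/25 split:

```python
import numpy as np

def bayer_mask(height, width, pattern="RGGB"):
    """Label each photosite with its filter color by tiling a 2x2 Bayer cell.
    Illustrative only; real sensors vary in which corner the cell starts."""
    cell = np.array(list(pattern)).reshape(2, 2)
    rows = np.arange(height) % 2
    cols = np.arange(width) % 2
    return cell[rows[:, None], cols[None, :]]

mask = bayer_mask(4, 6)
print(mask)
# Confirm the 50% green / 25% red / 25% blue split:
for channel in "RGB":
    print(channel, (mask == channel).mean())   # R 0.25, G 0.5, B 0.25
```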

Bryce Bayer was granted his patent (U.S. Patent No. 3,971,065) in 1976. He referred to the green-filtered photosites as luminance-sensitive elements and the red and blue ones as chrominance-sensitive elements. He used twice as many green elements as red or blue to mimic the physiology of the human eye, which is most sensitive to green light.

This Bayer pattern data from the sensor is what we call RAW image data.

To reconstruct a full-color RGB image from the data collected by the color filter array, some form of interpolation is needed to fill in the blanks. The exact mathematics varies from one implementation to another; the process is called demosaicing.

Demosaicing can be performed in different ways.

Simple methods interpolate the missing color values at each photosite from neighboring photosites of the same color. For example, a pixel with a green filter provides an exact measurement of the green component, while its red and blue components are obtained from the neighbors: the two nearest red photosites can be averaged to yield the red value, and the two nearest blue photosites averaged to yield the blue value.
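
As a rough illustration of that kind of neighborhood averaging, here is a minimal bilinear demosaicing sketch in Python (NumPy plus SciPy's convolve). It is my own simplification of the textbook bilinear method, not any camera maker's actual algorithm:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer RAW frame (2-D float array).
    Each missing color sample is replaced by the average of its nearest
    neighbors of that color. Returns an (H, W, 3) RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Standard bilinear interpolation kernels for sparse color planes.
    k_green   = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_redblue = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    rgb = np.zeros((h, w, 3))
    for channel, (mask, kernel) in enumerate(
            [(r_mask, k_redblue), (g_mask, k_green), (b_mask, k_redblue)]):
        sparse = np.where(mask, raw, 0.0)          # keep only this color's samples
        rgb[..., channel] = convolve(sparse, kernel, mode="mirror")
    return rgb
```

At a green photosite the red/blue kernel averages the two adjacent red (or blue) samples, and at a red or blue photosite the green kernel averages the four surrounding green samples, which is exactly the neighbor averaging described above.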

Pros and Cons

On the surface, the Bayer CFA may seem like the ideal solution for capturing color information. However, it involves some compromises.

  • Any single photosite can capture only a red, green or blue sample for its position in the matrix, making true full-RGB capture at every photosite impossible.
  • The reconstructed RGB image will always be the result of mathematical guesswork.
  • It can be argued that the effective resolution of an image captured from a Bayer CFA is substantially less than the sensor’s photosite count.
  • Aliasing and Moiré can be introduced as a result of demosaicing.
  • The colored filters themselves absorb and reduce the amount of light reaching the photosite, reducing the overall sensitivity of the sensor.

It’s not all bad, though.

  • Storing the RAW Bayer data results in a substantial reduction in file size compared to the uncompressed, full-raster RGB equivalent, since only one sample is recorded per photosite instead of three (see the rough size comparison after this list).
  • RAW Bayer data can be re-interpolated at a later stage as demosaicing algorithms and methods improve, resulting in increased image quality.
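
As a back-of-the-envelope illustration of the first point above (my own numbers, assuming a hypothetical 4096 x 2160 sensor recording 12 bits per sample), raw Bayer data stores one sample per photosite where uncompressed full-raster RGB stores three:

```python
# Hypothetical 4K sensor, 12 bits per sample, no compression.
width, height, bits = 4096, 2160, 12

raw_bayer_mb = width * height * bits / 8 / 1e6       # one sample per photosite
full_rgb_mb  = width * height * 3 * bits / 8 / 1e6   # three samples per pixel

print(f"RAW Bayer frame: {raw_bayer_mb:.1f} MB")     # ~13.3 MB
print(f"Full RGB frame:  {full_rgb_mb:.1f} MB")      # ~39.8 MB, roughly 3x larger
```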

The Bayer CFA is currently the “least bad” and most cost-effective solution when compared to other practical alternatives. Recently, however, some new technologies have surfaced which may finally eliminate the need for these compromises.

Panasonic’s Low Light Filterless Sensor Technology

Panasonic has made the news with its recent OPF (Organic Photoconductive Film) technology, promising higher sensitivity, wider dynamic range, an improved global shutter and variable sensitivity.

Panasonic also announced an unusual sensor architecture a few years back that separates colors by diffraction rather than absorption. That work seems to have gone quiet, but it’s an interesting development to mention regardless.

Instead of using the array of tiny absorptive micro-filters of a traditional CFA, this alternative approach uses what Panasonic calls “micro color splitters”, which diffract the light so that different combinations of wavelengths (colors) reach different photosites. In their paper in Nature Photonics, Panasonic’s researchers claim their solution allows the sensor to gather 1.85 times more light than traditional Bayer-array-based sensors.

This technology is not perfect. The photosites in these proposed sensors still do not capture full RGB values; rather, they capture combinations of colors (white+red, white-red, white+blue and white-blue) produced by the two types of deflector.

This means there is still a demosaicing process—and one which is particular to Panasonic.
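
To give a feel for what that Panasonic-specific reconstruction might involve, here is a toy algebraic sketch based only on the four combinations named above (my own simplification; this is not Panasonic's published pipeline). If a group of photosites measures roughly W+R, W-R, W+B and W-B, where W is the full white mixture R+G+B, then red, blue and finally green can be recovered with simple arithmetic before any spatial interpolation is applied:

```python
def recover_rgb(w_plus_r, w_minus_r, w_plus_b, w_minus_b):
    """Toy reconstruction from one group of color-splitter measurements.
    Assumes the four photosites see W+R, W-R, W+B and W-B, with W = R+G+B.
    Algebraic illustration only, not Panasonic's real method."""
    r = (w_plus_r - w_minus_r) / 2.0                          # (W+R) - (W-R) = 2R
    b = (w_plus_b - w_minus_b) / 2.0                          # (W+B) - (W-B) = 2B
    w = (w_plus_r + w_minus_r + w_plus_b + w_minus_b) / 4.0   # each pair averages to W
    g = w - r - b                                             # W = R + G + B
    return r, g, b

# A scene patch with R=0.2, G=0.5, B=0.3 (so W=1.0):
print(recover_rgb(1.2, 0.8, 1.3, 0.7))   # -> (0.2, 0.5, 0.3)
```

Because the four measurements still come from different photosites, a spatial interpolation step remains necessary, which is the caveat above.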

This was one of the downfalls of Foveon’s unique technology, which captured all of the light hitting the sensor by layering the three color receptors on top of one another. Each layer stripped off the color of light to which it was receptive, passing along the rest. Foveon did not have the benefit of other industry players developing the technology, software, and hardware needed along with them. As a result, it was years before Foveon had effective noise reduction and powerful enough processing to produce JPEGs in the camera.

Like Foveon, Panasonic has buried its invention under a thick pile of patents. So far.

University of Utah’s New Filter

A recently published article from the University of Utah presents a new filter, developed by Electrical and Computer Engineering professor Rajesh Menon, that drastically improves light transmission efficiency compared to the traditional Bayer CFA.

The filter is only about one micron thick and uses precisely designed ridges etched on one side to bend the light as it passes through, creating a series of color patterns, or codes. Software then reads the codes to determine which colors they represent.

Approximately 25 color codes are created instead of three, which, it is claimed, results in far more accurate color rendition and very little noise.

The filter is also cheaper to produce than the current Bayer filter.

The full-color image will, once again, be the result of sophisticated computer processing, so there is still no true full-color spatial resolution.
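
The article doesn’t describe Menon’s reconstruction in any detail, but conceptually, recovering color from a larger set of codes can be framed as inverting a known linear mixing. The sketch below is purely hypothetical: the 25-row mixing matrix is random stand-in data rather than the real filter response, and a simple least-squares solve stands in for the actual software. It only illustrates why many coded measurements per color can help average out noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensing model: 25 coded measurements per pixel group, each a
# known linear mixture of the underlying R, G, B values, plus sensor noise.
mixing   = rng.uniform(0.0, 1.0, size=(25, 3))   # stand-in for the filter's response
true_rgb = np.array([0.2, 0.5, 0.3])
codes    = mixing @ true_rgb + rng.normal(0.0, 0.01, size=25)

# Reconstruction: least-squares inversion of the overdetermined mixing.
estimated_rgb, *_ = np.linalg.lstsq(mixing, codes, rcond=None)
print(estimated_rgb)   # close to [0.2, 0.5, 0.3]
```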

Looking Ahead

All of these technologies require complex algorithms and interpolation to produce a final full-color image, and it’s too early to judge which of them will give the best results. As with the light field technology demonstrated by Lytro Cinema, we may well see as much growth and development on the computational and processing side of things as in the physical sensor itself.

We don’t know what may eventually surpass and replace the Bayer sensors we all rely on now. One thing is certain, though. The future of digital imaging will undoubtedly remain dependent on advanced algorithms and powerful image processing.

What we do know is that technology marches forward, fuelled by the fact that we’re always going to buy a better camera.

Comments (6)

 Paul Fortunato
Member
June 3rd, 2016

I absolutely agree that image processing is likely where we will see the biggest progress. An example of this is the new Light camera (not Lytro), which uses 10 or 11 different cell phone sensors at once to create an image and record different DoF information.

I’m under the impression that currently, dedicated camera processors are sort of dinosaur tech to begin with… still being built on quite old, 45nm wafers. What I’ve always wondered is: why? Why are these processors, which are also responsible for encoding/decoding faster framerates for both stills and videos, still being built on such old technology, and why haven’t camera manufacturers simply purchased even an entry-level Snapdragon processor for their processing?

Anonymous
Guest
May 18th, 2016

Great article!

Eno Popescu
Member
May 17th, 2016

Those statements are very exaggerated:

” the effective resolution of an image captured from a Bayer CFA is substantially less than the sensor’s photosite count.”

It has been proven that with current demosaicing algorithms only about 30% of the full-color image resolution is lost… this is far from a “substantial” loss.

“Storing the RAW Bayer data results in a 33% reduction in file”

This is totally incorrect; the raw image occupies three times less space than an equivalent full-color image. Just compare a RAW file with a TIFF and see for yourself.

I’m not saying that we must not look beyond Bayer tech, but at the same time we must see the whole picture correctly. :)

Tim Naylor
Member
May 20th, 2016
Reply to  Eno Popescu

30% is substantial. It’s the reason the BBC EBU tests of Red’s 4K, 5K, and 6K resolve “substantially” less than those numbers, at 2.5K, 3.0K, and 3.4K respectively. Yes, 30% is more than substantial, considering manufacturers advertise their cameras as being 4K, etc., but never tell you the dirty secret of what they truly resolve.

While Bayer has its advantages, I still find the colors pale compared to the best of CCD tech.

Eno Popescu
Member
May 24th, 2016
Reply to  Tim Naylor

30% is negligible; I call substantial something like 2 or 3 times better, not a 0.3 times difference.

“While Bayer has it advantages, I still find the colors pale compared the best of CCD tech.”
That phrase makes no sense; most cameras with a CCD sensor have a Bayer pattern. :)

Lonn
Guest
January 10th, 2019
Reply to  Eno Popescu

You may need to read up on the definition of “substantial” and get a better grasp of percentages.
If you lose 30% of your resolution, you are left with only 70% of the original resolution.
This reduces a 30MP sensor to 21MP.
I honestly don’t know how you can consider this to not be substantial.
