...what's lurking in your DNG files that you haven't seen yet?
Turning M9 color DNGs into monochrome DNGs
A series about the development of an experimental piece of software, called DNGMonochrome, able to convert color DNGs into monochrome DNGs...
The software is available here.
Some background on digital sensors
For starters, the sensor in a digital camera cannot register color. Each pixel in the sensor can only measure the intensity of the light hitting it and translate that analogue signal into a digital value. And that value doesn't tell you what color the light had when it struck the pixel.
To overcome this problem, a camera that needs to produce color photos is outfitted with a Bayer filter, pasted on top of its sensor.
This Bayer filter (named after its inventor) turns every sensor pixel into a pixel that can only register the red, green or blue component of the light hitting it. That way you can more or less figure out how bright the red light was at a specific spot on the sensor, compared to the green or blue light (note the 'more or less' here).
The image of the Bayer filter above shows only a tiny part of such a filter, since the sensor of the M9 contains about 18 million pixels, and every pixel has one of these three color filters (red, green or blue) on top of it. So imagine 18 million of these square colored thingies, one square thingy (either green or red or blue) on top of one sensor pixel.
Color cameras out there don't all use the same type of Bayer filter. The above image is specific to some cameras, but not to all. Some cameras use Bayer filters with four colors, some use Bayer filters with the colors in a different arrangement, and some use filters with different colors altogether. The Foveon sensor, for instance, operates on an entirely different principle: it uses a layered technique to filter and split the light hitting it.
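To make the mosaic idea concrete, here's a minimal sketch of one common arrangement, the RGGB pattern, in which a 2x2 tile of filters repeats across the whole sensor (the actual pattern and where it starts differ per camera model, so take RGGB here as an illustrative assumption, not as the M9's layout):

```python
import numpy as np

# One 2x2 tile of an assumed RGGB Bayer filter; the tile repeats
# across the entire sensor, so a pixel's filter color depends only
# on its row and column modulo 2.
PATTERN = np.array([["R", "G"],
                    ["G", "B"]])

def bayer_color(row, col):
    """Return which color filter covers the sensor pixel at (row, col)."""
    return PATTERN[row % 2, col % 2]
```

Note that half of all pixels end up under a green filter and only a quarter each under red and blue, mimicking the eye's greater sensitivity to green.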
So what happens if you don't add a Bayer filter on top of the sensor? Well, if it isn't a Foveon sensor, you get a camera that can only take black & white photos, since there's no way of knowing what color the light hitting the sensor was.
Such a camera would produce a monochrome RAW file.
But let's first stick to the color photo...
Getting to the color photo
First of all, those sensor values registered when you took the photo are put in a file. And since this file contains the direct sensor output (the 'raw' output) it's called a 'RAW' file. The Leica M9 produces a RAW file in the form of a DNG, which is an Adobe invention (an extension of the TIFF format actually) and stands for Digital Negative.
So when the RAW file comes out of your camera, it contains the values of every sensor pixel.
The RAW file then needs to be put through a RAW converter (Lightroom, for example, contains such a RAW converter), which can translate the values to colors on your screen. That means the RAW converter needs to know what type of Bayer filter was on the sensor, else it can't determine which value came from a red, green or blue filtered pixel.
In an M9 DNG (or read 'M9 RAW file') that information is stored in the camera profile, which is stored in the DNG. The camera profile contains information that describes the structure of the Bayer filter (the mosaic info), so the RAW converter knows where to look for red, green and blue within the 18 million values the RAW file holds.
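As a sketch of what that mosaic info looks like: DNG inherits from TIFF/EP a pair of tags, CFARepeatPatternDim (the size of the repeating filter tile) and CFAPattern (the color of each cell in that tile, with 0 = red, 1 = green, 2 = blue). The decoding helper below is my own illustration, not DNGMonochrome's actual code:

```python
# TIFF/EP color codes used inside the CFAPattern tag.
CFA_COLORS = {0: "R", 1: "G", 2: "B"}

def decode_cfa(repeat_dim, pattern_bytes):
    """Turn the raw tag values into a human-readable filter tile.

    repeat_dim    -- (rows, cols) from CFARepeatPatternDim
    pattern_bytes -- flat list of color codes from CFAPattern
    """
    rows, cols = repeat_dim
    return [[CFA_COLORS[pattern_bytes[r * cols + c]] for c in range(cols)]
            for r in range(rows)]

# An RGGB sensor would store dimension (2, 2) and bytes [0, 1, 1, 2]:
tile = decode_cfa((2, 2), [0, 1, 1, 2])
```

With this tile in hand, the RAW converter knows exactly which of the 18 million values is red, green or blue.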
Then, in order for a pixel on your screen to show the correct color (as 'seen' by your lens and you), these different color values produced by the Bayer equipped sensor - put in the RAW file that comes out of the camera - need to be mixed back again.
This process of 'mixing back' is known as color interpolation and this step is also performed in the RAW converter (or 'in camera' if you shoot JPG and not RAW).
Here's the rub: a green sensor pixel needs to turn into a pixel on your screen that also contains some of the blue and red light, else it would stay green forever (which is fine if your subject was actually green, but not if it was orange, yellow or purple, to name a few colors - not to mention the myriad of shades of green out there in the real world)... and a red sensor pixel needs to borrow some of the green and blue light, else it would stay a red dot in your photo... similar for the blue pixel (you get the point).
The green, red and blue pixels need to borrow from each other to turn into a full color pixel on your screen.
All software that can read RAW files needs to color interpolate in order to show a correct picture on your screen.
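The 'borrowing' described above can be sketched in a few lines. This is roughly the simplest scheme (averaging the nearest pixels that do carry the wanted color, sometimes called bilinear interpolation); real converters are far fancier, so treat this as a toy illustration:

```python
import numpy as np

def interp_at(mosaic, color_mask, row, col):
    """Estimate a missing color at (row, col) by averaging the pixels in
    the surrounding 3x3 window that actually carry that color.

    mosaic     -- 2-D array of raw sensor values
    color_mask -- 2-D boolean array, True where the wanted color sits
    """
    r0, r1 = max(row - 1, 0), min(row + 2, mosaic.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, mosaic.shape[1])
    window = mosaic[r0:r1, c0:c1]
    return window[color_mask[r0:r1, c0:c1]].mean()
```

So a blue pixel 'borrows' green by averaging its four green neighbors, and so on, exactly the kind of guessing the next section complains about.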
Something fishy and slightly evil
But you can already smell something fishy: those pixels borrowing from each other, that isn't something we should be happy about.
It's an evil necessity.
Without it we wouldn't have a color photo, but with it we don't have an outstanding color photo, because we lose something in that whole process: color interpolation is basically fancy guessing. Because if you have a green pixel, and two blue pixels next to it... how much blue should the green one get? And where should a blue one look for its additional green and red? Up, or down, or left, or right, or south east, north west or in all possible directions, or in just a few?
Don't be shocked, but your color photo is actually a bit of a lie... fancy guesswork.
An image torn apart in three colors and then mixed back again.
It's rather messy and a little bit violent...
The Algorithm
To add or not to add (and how much to add and from where to add), that's the question. And that question is answered by the color interpolation algorithm. And there are many of those algorithms around. From very simple 'just look at the neighboring pixel' to extremely fancy neural network approaches. If you start searching for color interpolation algorithms on the Internet, you can easily find 20 different approaches within minutes (well okay, perhaps a bit longer if you're new to this).
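To show just how crude the simple end of that spectrum is, here's a sketch of the 'just look at the neighboring pixel' approach: nearest-neighbor demosaicing, where each 2x2 tile's single red, green and blue sample is simply copied across the whole tile (RGGB layout assumed; fast, but it produces blocky color fringes):

```python
import numpy as np

def demosaic_nearest(mosaic):
    """Nearest-neighbor demosaic of a 2-D mosaic with even dimensions,
    assuming an RGGB tiling. No averaging, no cleverness: every pixel in
    a 2x2 tile gets that tile's R, G and B samples verbatim."""
    h, w = mosaic.shape
    rgb = np.empty((h, w, 3), dtype=float)
    planes = (mosaic[0::2, 0::2],   # R samples
              mosaic[0::2, 1::2],   # one of the two greens per tile
              mosaic[1::2, 1::2])   # B samples
    for ch, plane in enumerate(planes):
        # Blow each per-tile sample up to cover its full 2x2 tile.
        rgb[..., ch] = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    return rgb
```

The 20 fancier algorithms you'd find online mostly differ in how much smarter they are about which neighbors to trust, and in which direction.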
Back to monochrome
In an original monochrome RAW file, coming out of a black & white camera - with a sensor without a Bayer filter - color interpolation isn't necessary, since nobody is expecting color out of a black & white camera. Well, no, of course that's not the real reason, besides... some very unreasonable people - in a moment of weakness and when low on money (the Leica MM is not cheap) - might still expect a color photo out of such a camera (which is really impossible).
No, it's simply not necessary.
The pixels aren't torn apart into separate colors. They're one big happy family all working together to produce one photo that isn't split into three different colors.
It makes for increased resolution and sharper photos, since the previously 'red' and 'green' and 'blue' pixels now carry the full information (they don't exist anymore as pixels that register a 'split' value). No 'evil' interpolation is necessary (of course, you do have to be happy with only black & white out of your camera and stay reasonable...)
It's much more peaceful.
And back to my brain again
Then I thought: what would happen if you treat an M9 DNG (or read 'M9 RAW file') as a monochrome DNG?
Let's not color interpolate and see what happens (I bet it's not pretty)...
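A quick thought experiment in that spirit: read the Bayer mosaic as if it were monochrome data, with no interpolation at all. Even a uniformly gray scene won't come out flat, because the red, green and blue filters each transmit different amounts of light. The transmission factors below are made-up illustration values chosen for exact arithmetic, not measured M9 data; only the fact that they differ matters:

```python
import numpy as np

def fake_gray_scene_mosaic(h, w, level=100.0):
    """What a Bayer sensor might record for a uniformly gray scene.
    The per-filter transmissions are invented illustration values
    (NOT real M9 measurements) -- only their inequality matters."""
    trans = np.array([[0.25, 0.50],    # under the R and G filters
                      [0.50, 0.125]])  # under the G and B filters
    return level * np.tile(trans, (h // 2, w // 2))

mosaic = fake_gray_scene_mosaic(4, 4)
# Read as monochrome with no interpolation, the flat gray scene comes
# out as a 2x2 checker of different brightnesses: the Bayer pattern
# itself, stamped across the whole image.
```

Which hints at why simply skipping interpolation can't be the whole story...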
... continue with part III
... back to part I