So, why the delay?
Well, not totally happy with the approach so far, I decided to dive a bit deeper into the process of getting to true sRGB. My YUV theory and the formulas do work (although I now think I'm not officially allowed to call it YUV... the approach works because of the relative relationship between green, red and blue on the sensor - a manufacturer-specific YUV, if you will...), but seeing how every sensor is different, it's hard to tell how 'green' the green result actually is.
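To make that a bit more concrete, here's a minimal sketch of the idea in Python/numpy. The luma weights below are the standard BT.601 ones, purely as a stand-in - the whole point of the approach is that the real weights depend on the sensor:

    import numpy as np

    def chroma_filter(rgb, strength=1.0):
        # Split the image into a luma part and two chroma differences,
        # scale the chroma by the filter strength, then recombine.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luma (BT.601 stand-in)
        u = b - y                                # blue-difference chroma
        v = r - y                                # red-difference chroma
        out = np.empty_like(rgb)
        out[..., 0] = y + strength * v
        out[..., 2] = y + strength * u
        # Recover green from the luma definition and the new red/blue.
        out[..., 1] = (y - 0.299 * out[..., 0] - 0.114 * out[..., 2]) / 0.587
        return out

With strength=1.0 the image passes through unchanged, and lower values pull the chroma towards neutral while the luma stays put.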
And that was bothering me.
So I then turned to a more official approach: applying white balance and then using a color matrix.
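The order of operations is simple enough; a quick sketch, where both the gains and the matrix are made-up illustrative numbers:

    import numpy as np

    # Illustrative white balance gains (green-normalized) and an
    # illustrative camera-RGB -> sRGB matrix; rows sum to 1.0 so that
    # a white-balanced neutral stays neutral.
    wb_gains = np.array([2.0, 1.0, 1.5])
    color_matrix = np.array([[ 1.8, -0.6, -0.2],
                             [-0.3,  1.6, -0.3],
                             [ 0.0, -0.7,  1.7]])

    def raw_to_srgb(raw_rgb):
        balanced = raw_rgb * wb_gains        # step 1: white balance
        return balanced @ color_matrix.T     # step 2: color matrix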
Initially I used the sensor data DxO has published on their website.
But there seems to be a problem: the relationship between their documented white balance and the color matrix they present is unclear. They don't specify how they get to their numbers (it is according to an ISO standard, but I was unable to find the exact calculations)...
It seems the matrix might have to be different under different lighting conditions or with a different white balance. And you can't simply apply their documented white balance, because white balance is photo-specific: either set to 'auto' by the camera, or set by the user (and stored in the AsShotNeutral tag of the DNG).
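For what it's worth: AsShotNeutral stores the camera-space color of a neutral patch, so the actual gains are just its reciprocals, usually normalized to green:

    import numpy as np

    as_shot_neutral = np.array([0.5, 1.0, 0.7])  # example value from a DNG
    wb_gains = 1.0 / as_shot_neutral             # invert to get multipliers
    wb_gains /= wb_gains[1]                      # normalize green to 1.0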
It's all kinda vague, and I wasn't thrilled with the results: a rather weak green filtering and a humongously strong - really over the top - red and blue filtering.
And seeing how it's unclear how to balance it all out, I'm not sure about this one.
On to the third approach, the most complex one: convert the RAW colors through a matrix to a profile connection space (XYZ), and then use a generic sRGB color matrix to convert that to sRGB. The conversion to XYZ is the tricky part. It takes into account two color matrices, two forward matrices, camera calibration, white balance, and white point settings for different illuminants, all taken from the camera profile.
All that data is used to produce nine numbers - a 3x3 matrix - which is then used to convert the RAW sensor data to the XYZ color space.
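Heavily simplified - leaving out the calibration and forward matrices and the chromatic adaptation - the core of it comes down to something like this sketch. Note that the DNG color matrices map XYZ to camera space, so they have to be inverted:

    import numpy as np

    def camera_to_xyz(color_matrix_1, color_matrix_2, weight):
        # Blend the two profile matrices (tabulated for two different
        # illuminants) by a weight derived from the shot's white point,
        # then invert: ColorMatrix maps XYZ -> camera, we need the reverse.
        xyz_to_cam = (1.0 - weight) * color_matrix_1 + weight * color_matrix_2
        return np.linalg.inv(xyz_to_cam)     # the 'nine numbers'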
Luckily Adobe provided most of the code to accomplish that conversion.
Then the XYZ data is converted to sRGB through another matrix.
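That second matrix is at least a known constant; for linear sRGB with a D65 white point it is:

    import numpy as np

    # Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1)
    XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])

    def xyz_to_srgb(xyz):
        # Linear sRGB; gamma encoding would still follow for display.
        return np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)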
This approach also works and gives better results than the DxO approach.
But here too there's a catch: these matrices and the conversion are rather complex and tightly coupled. It's very difficult to figure out where and how to apply the strength setting, or where to incorporate that infamous black level. There are too many variables involved over too many layers, and it's unclear where to tweak what.
I haven't gotten very far yet adapting this approach to what I want to accomplish.
In the end, after this rather exhausting detour through different approaches, I think the best results came from my first attempts, based on the YUV idea.
So I implemented that one, including white balancing, and I'm quite happy with the result.
But so as not to throw away my other hard work on the third attempt, I'm contemplating creating another set: this time the full sRGB or AdobeRGB. Those won't have a strength setting; you'll be looking at the pure red, green and blue result, fully based on the camera profiles. Because here's the rub: the RAW red also contains a little bit of blue and green, and the RAW blue also contains a small amount of red and green. The present filtering doesn't take that into account. That's why the sRGB filtering on, say, red is so much stronger: it's the true, non-diluted red. Noisy and over the top, because the little bit of blue and the little bit of green - still left in by the other filters - is now also gone.
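To put a (made-up but representative) number on that: take the illustrative red row [1.8, -0.6, -0.2] from the earlier sketch. The negative coefficients subtract the green and blue that leaked into the raw red channel, and that same subtraction amplifies the noise:

    import numpy as np

    red_row = np.array([1.8, -0.6, -0.2])      # illustrative values only
    # For uncorrelated per-channel noise, the noise gain is the
    # root-sum-of-squares of the coefficients: ~1.9x the raw red noise.
    noise_gain = np.sqrt(np.sum(red_row ** 2))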
This second set won't be in the next release though. Let's first present what I have now, which will take a few more days to finish up.