So what's next?
Well, that would be me finally giving some attention to photos I took in recent months. Since yeah, this did start out as a photo blog. Some readers and viewers might be losing the will to live by now, after being confronted with all this nerd talk, so maybe it's time for some images...
Also, I have to go back on a promise - which I don't like doing in general - but a port of DNGMonochrome to OS X will have to wait until I settle down a bit with the experimenting. I have too much stuff I want to try out in the current version before I can set myself to the quite boring and time-consuming task of translating code, ending up with basically nothing new...
I am however looking into ways of restructuring the code I have now, to make the core more portable. Then only the user interface would have to be redone.
Where DNGMonochrome is concerned, I will be concentrating next on the algorithms and noise.
Algorithms
The aim was to get what I have now into only one algorithm: retaining the sharpness of the sharpest, but with few to no artifacts.
Currently I'm testing a new algorithm that comes very close.
It's faster than what I have now, it's sharper than the 'smooth' algorithm, and it's without any of the artifacts the current sharp algorithms can produce. No speckles on high ISO, no ringing on highlighted edges, nicely stepped diagonals with no pixels misbehaving...
It's close to what I want... it just wasn't as sharp as the sharpest one I have now.
But after some tuning, I managed to solve that with an extra setting, optionally changing the innards of the algorithm slightly.
Overall it means going to a single algorithm with one extra option for sharpness: ditching three separate algorithms, ditching the mixover and the quality slider, gaining speed, and basically improving the conversion, since the new algorithm doesn't misbehave and is still sharper than the current smooth one.
And with the extra option you can still get to gritty, noisy and sharper.
In both cases (with or without the sharper option) the algorithm produces sharper results than Lightroom turned B&W.
I still need to run some tests on this new stuff, and see if it also works out for the RAW red and the RAW blue, but if it does, it will most likely be in the next release.
Noise
At present, the noise reduction offered in DNGMonochrome is kinda lame.
Now the assumption was that 'lame' was actually okay, because the monochrome DNG is not your finished result: Lightroom or any other RAW converter you use can take care of the noise. I threw in those crusty median filters because I had experimented with them and they actually worked, so why not... but they are not very sophisticated...
But in the latest version I introduced RGB filtering: the red and blue interpolated results are mixed in with the regular result, so noisy pixels (especially the red channel can be quite messy) are introduced into the regular result if you use those filters.
Being able to perform good solid noise reduction on the RAW red and RAW blue, before they are used in the filtering, would be a very welcome addition.
Now, when you google 'noise reduction' it's almost inescapable: it needs to be wavelets.
Wavelets?
Yes, wavelets.
Fourier transforms are also possible, but that seems to be yesterday's thing: wavelets are the way to go... (also not very recent mind you, but it's what all the kids rave about: wavelets!).
I had no clue... that's the fun of DNGMonochrome (for me that is). When I started it I had even less of a clue. It forces me to dive deeper and deeper, learning heaps of new stuff.
Pointless really, but it keeps me busy.
After reading up on wavelets, I finally had some vague idea of what they were, but it was all still very abstract.
Then I stumbled onto a piece of programming code, something practical, that implemented 'wavelet denoising'... (dcraw: great program, source code not easy to read, but at least that much I'm capable of)...
So I grabbed that code - really not a lot of lines, strangely: after reading many a thick PDF on the theory of wavelets, one would expect some bulky programming code, but not so... - and started to experiment with it.
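The dcraw routine is built around an à-trous (undecimated) wavelet transform with soft thresholding. As a rough sketch of the idea - a hypothetical 1-D version written from scratch, not the dcraw code itself:

```python
import numpy as np

def atrous_denoise(signal, threshold, levels=4):
    """Denoise a 1-D signal with an a-trous (undecimated) wavelet transform.

    Each level smooths with an increasingly dilated [1, 2, 1] / 4 kernel;
    the 'detail' (difference) coefficients are soft-thresholded, and the
    result is the residual smooth plus the shrunken details.
    """
    smooth = np.asarray(signal, dtype=float)
    n = smooth.size
    shrunk_details = []
    for level in range(levels):
        step = 2 ** level  # the 'holes' (trous): kernel dilation per level
        padded = np.pad(smooth, step, mode='edge')
        smoother = (padded[:n] + 2.0 * smooth + padded[2 * step:]) / 4.0
        detail = smooth - smoother
        # soft threshold: shrink coefficients toward zero, kill small ones
        detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
        shrunk_details.append(detail)
        smooth = smoother
    return smooth + sum(shrunk_details)
```

That's the whole trick: at fine scales the detail coefficients are mostly noise, so shrinking them toward zero and summing everything back removes noise while larger (real) detail survives the threshold.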
Insight
I first adapted it to actually show me these mysterious 'wavelets' (or at least their consequences) and then I really started to understand (well, sort of...)
Then it took me a while to get it adapted to work with DNGMonochrome, and when it finally did: bingo.
The Sight of Silence
Impressive noise reduction.
No joke: seriously impressive noise reduction.
In comparison with Lightroom (3.6) one might even claim - on very, very close inspection - that the wavelets rule: they seem to deal slightly better with fine detail (although I can't exclude Lightroom also using a wavelet technique: the results are remarkably similar - and when you throw in the detail slider in Lightroom, the difference becomes negligible).
Getting philosophical
Of course 'good' and 'better' become very relative, because almost none of these differences are visible at 100%. This whole quality thing is really about extreme pixel peeping, under the assumption that if it's better at 800% magnification, it must also be better at 100%. Which in itself is true of course (is it?), but it sidesteps the question: can we still see it?
Or on a more philosophical note: is better still better if we can't experience the difference?
Anyway...
Improvements?
After reworking this wavelet code a bit (adding an extra 'detail' setting and getting it more in line with my own programming), and reading some more on wavelets, I saw some concerns in the present code I'd like to address.
For instance: wavelets operate with thresholds. But the basic threshold in the present code is fixed per level and it's not documented where it comes from. There are suggested ways to determine the threshold dynamically instead of setting it upfront, so I want to have a look at that.
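One standard way to determine the threshold from the data itself - assuming one goes with the classic Donoho-Johnstone recipe, which is my reading of the literature, not necessarily what dcraw intends - looks like this:

```python
import numpy as np

def universal_threshold(finest_detail):
    """Estimate a wavelet denoising threshold dynamically.

    Donoho-Johnstone recipe: estimate the noise standard deviation
    robustly from the finest-level detail coefficients via the median
    absolute deviation (sigma = MAD / 0.6745), then take the 'universal
    threshold' sigma * sqrt(2 * ln N).
    """
    d = np.abs(np.asarray(finest_detail, dtype=float).ravel())
    sigma = np.median(d) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(d.size))
```

The appeal is that the threshold then scales with the actual noise in the photo (so high ISO automatically gets a heavier hand) instead of being hardcoded per level.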
Also, the number of decomposition levels (how many wavelets are created in total) is fixed, where it might be better to use a high number of levels for low ISO photos and fewer levels for high ISO photos. I need to examine that and see if I can detect a difference when changing the number of levels.
And I read about possible optimization steps after the denoising, for getting back lost detail, so I also need to examine that one.
So, enough to do for a next release...
Wednesday, November 21, 2012
Monday, November 19, 2012
DNGMonochrome 0.9.6 beta released
Ok, finally...
It's a release with quite a bit of new stuff, so I do expect bugs. Check back soon.
Here are all the changes...
Bug fixes
- Fixes a bug that on rare occasions could lead to a checkerboard pattern on blown-out highlights... it's very unlikely that any of your converted photos were affected by this bug
- Fixes a bug which could lead to a crash when pressing the Cancel button during a running blue filter process
Changes
- Changes the strength setting from steps of 25% to steps of 5%
- Changes the filter options to a drop down list
- Changes the file naming... filename will now indicate the type of filter (RAW or RGB), the filter used (color), and the strength setting of the filter
New stuff
- Adds a red, green and blue filter to the filter options
- Adds option to change the white balance of the photo before filtering (on RGB filters only)
- Adds option to use the RGB filters as a gradient in four directions, with options to end the gradient anywhere on the photo
- Adds option to boost the strength of the RGB filters when using the filters as gradient
- Adds dead pixel functionality, to register dead pixels per camera
- Adds quality option to fix registered dead pixels in the monochrome DNG
Dropped stuff
- Because the new red RGB filter at less than full strength works slightly similarly to the RAW red filter (and the same goes for the new blue RGB filter versus the RAW blue filter), the strength setting for the RAW filters was dropped
- Internal 8-bit support was dropped (mainly to make it easier to maintain the code)... effectively this means the M8 DNGs and compressed M9 DNGs are decompressed after loading and the resulting monochrome DNG will be written decompressed - 16 bit (making the monochrome DNG file larger for original 8-bit DNGs)
Extra documentation
More on the new 'dead pixel mapper' and how to work it here.
More on the new white balance slider and what it does here.
Download
You can download the new version of DNGMonochrome here.
Labels:
DNGMonochrome
Saturday, November 10, 2012
Filter fun...
That does sound a little bit geeky...
With all the filters almost finished, I thought it would be nice if you could actually apply them selectively... I call it 'gradient filters'. Not very original, the naming, granted, but it describes the working quite well.
They work a bit like the neutral density filter in Lightroom, albeit with a different user interface and quite a different effect.
Here's an example.
Red filtered gradient on only the top 1/3 of the photo.
Move over the photo with your mouse to see the effect...
Regular monochrome from Leica M9...
Move over the image to see the gradient red filtered monochrome...
As you can see, it brings out the sky a bit and changes the top of the buildings on the right half, but the bottom half of the photo is completely untouched. It makes the rather flat top half of this image a little more vibrant.
Using the neutral density filter in Lightroom to achieve a similar effect won't work. It will darken the blue of the sky, but also the top of the buildings and the white clouds: instead of bringing out these details, the density filter would just darken them.
This photo was filtered according to this setting in DNGMonochrome:
You can filter from top to bottom, bottom to top, left to right and right to left (especially handy for images that need to be rotated, since DNGMonochrome doesn't do that for you... I follow the lazy Leica way...). You can then select where the gradient needs to end on the photo by adjusting the slider. You can apply these gradients in combination with all the RGB color filters (red, green and blue), not just with red (and mind you: the 'gray' gradient you see in the image of the window above represents one of the three color filters... it indicates where the effect will be strongest and where it ends... it's not a 'gray' filter).
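Under the hood a gradient like this boils down to a per-row (or per-column) blend weight between the regular and the filtered monochrome. A hypothetical sketch - function and parameter names are mine, not the actual DNGMonochrome internals:

```python
import numpy as np

def gradient_blend(mono, filtered, end=1/3, direction='top'):
    """Blend a color-filtered monochrome into the regular one as a gradient.

    Hypothetical sketch: the blend weight is 1.0 at the starting edge and
    falls linearly to 0.0 at 'end' (a fraction of the image height or
    width); pixels past that point keep the unfiltered result.
    """
    h, w = mono.shape
    if direction in ('top', 'bottom'):
        ramp = np.clip(1.0 - np.arange(h) / (end * h), 0.0, 1.0)
        weight = (ramp if direction == 'top' else ramp[::-1])[:, None]
    else:  # 'left' or 'right'
        ramp = np.clip(1.0 - np.arange(w) / (end * w), 0.0, 1.0)
        weight = (ramp if direction == 'left' else ramp[::-1])[None, :]
    return (1.0 - weight) * mono + weight * filtered
```

The clipping is what makes the bottom of the photo completely untouched: once the ramp hits zero, the original pixels pass through unchanged.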
I think Photoshop might offer similar functionality, but you can't do this in Lightroom (at least not in 3.6). It would require a neutral density filter type option which can selectively color mix.
This will also be in the next release of DNGMonochrome...
Wednesday, November 7, 2012
Delay...
So, why the delay?
Well, not totally happy with the approach so far, I decided to dive a bit deeper into the process of getting to true sRGB. My YUV theory and the formulas do work (although I now think I'm not officially allowed to call it YUV... the approach works because of the relative relationship between green, red and blue on the sensor - a manufacturer specific YUV if you will...), but seeing how every sensor is different, it's hard to tell how 'green' the green result actually is.
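For the idea of a luma-style weighted mix, here's the standard BT.601 formula purely as an illustration; the weights DNGMonochrome actually uses are relative to the specific sensor, as described above:

```python
def luma(r, g, b, weights=(0.299, 0.587, 0.114)):
    """Weighted 'luma' mix of the three channels.

    The defaults are the standard BT.601 coefficients, for illustration
    only; a sensor-specific variant would use weights derived from the
    relative response of that sensor's green, red and blue.
    """
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b
```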
And that was bothering me.
So I then diverted to a more official approach, by applying white balance and then using a color matrix.
Initially I used the sensor data DxO has published on their website.
But there seems to be a problem: the relationship between their documented white balance and the color matrix they present is unclear. It's not specified how they get to their numbers (it is according to an ISO standard, but I was unable to find the exact calculations)...
It seems the matrix might have to be different under different lighting conditions or with a different white balance. And you can't simply apply their documented white balance, because white balance is photo specific - either 'auto' by the camera, or set by the user (stored in the AsShotNeutral tag of the DNG).
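The mechanics of this second approach are simple enough to sketch; the hard part is knowing which numbers to put in. A hypothetical sketch:

```python
import numpy as np

def wb_then_matrix(raw_rgb, as_shot_neutral, cam_to_srgb):
    """White balance first, then a 3x3 color matrix.

    raw_rgb: N x 3 camera-space values; as_shot_neutral: the camera's raw
    response to a neutral patch (dividing by it maps neutral to 1,1,1);
    cam_to_srgb: a hypothetical 3x3 camera-to-sRGB matrix, e.g. one
    derived from a published sensor characterization.
    """
    balanced = np.asarray(raw_rgb, dtype=float) / np.asarray(as_shot_neutral)
    return balanced @ np.asarray(cam_to_srgb, dtype=float).T
```

The open question described above is exactly this coupling: whether `cam_to_srgb` is still valid once `as_shot_neutral` differs from the white balance the matrix was characterized under.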
It's all kinda vague, and I wasn't thrilled with the results: a kind of weak green filtering and a hugely strong - really over the top - red and blue filtering.
And seeing how it's unclear how to balance it all out, I'm not sure about this one.
On to the third approach, the most complex one: convert the RAW colors through a matrix to a profile connection space (XYZ), and then use a generic sRGB color matrix to convert that one to sRGB. The conversion to XYZ is the most complex one. It takes into account two color matrices, two forward matrices, camera calibration, white balance and white point settings through different illuminant settings, all taken from the camera profile.
All that data is used to produce nine numbers, which are then used to convert the RAW sensor data to the XYZ color space.
Luckily Adobe provided most of the code to accomplish that conversion.
Then the XYZ data is converted to sRGB through another matrix.
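The XYZ-to-sRGB half at least is generic. A sketch of the last two hops, with the hard-won camera-to-XYZ matrix simply taken as given:

```python
import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix; this half is generic.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def camera_to_linear_srgb(raw, camera_to_xyz):
    """Camera space -> XYZ -> linear sRGB.

    camera_to_xyz stands in for the 'nine numbers' distilled from the DNG
    profile (color matrices, forward matrices, calibration, white point);
    deriving those is the hard part and is not shown here.
    """
    xyz = np.asarray(raw, dtype=float) @ np.asarray(camera_to_xyz).T
    return xyz @ XYZ_TO_SRGB.T
```

A quick sanity check of the generic half: the D65 white point (X, Y, Z ≈ 0.9505, 1.0, 1.089) should land on white (1, 1, 1) in linear sRGB.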
This approach also works and gives better results than the DxO approach.
But also here there's a catch: these matrices and the conversion are rather complex and tightly interlocked. It's very difficult to figure out where and how to apply the strength setting or where to incorporate that infamous black level. There are too many variables involved over too many layers, and it's unclear where to tweak what.
I didn't get very far yet adapting this approach to fit what I want to accomplish.
In the end, after this rather exhausting detour of different approaches, I think the best results were with my first attempts, based on the YUV idea.
So I implemented that one, including white balancing, and I'm quite happy with the result.
But so as not to throw away my hard work on the third attempt, I'm contemplating creating another set: now in full sRGB or AdobeRGB. These won't have a strength setting, and you'll be looking at the pure red, green and blue result, fully based on camera profiles. Because here's the rub: the RAW red also contains a little bit of blue and green. And the RAW blue also contains a small amount of red and green. The present filtering doesn't take that into account. That's why the sRGB filtering on e.g. red is so much stronger: it's the true undiluted red. Noisy and over the top, because the little bit of blue and little bit of green - still left with the other filters - is now also gone.
This second set won't be in the next release though; let's first present what I have now, which will take a few more days to finish up.