Have you heard about Adobe’s recent update to Lightroom? It has a new feature called “Enhance Details”. Adobe says it:
“approaches demosaicing in a new way to better resolve fine details and fix issues like false colors and zippering. Enhance Details uses machine learning—an extensively trained convolutional neural network (CNN)—to provide state-of-the-art quality for the images that really matter.”
You can read an explanation of what they’ve done on their blog at this link: https://theblog.adobe.com/enhance-details/. It sounds like another fascinating advance in computational photography. It’s also a great example of why you should shoot in Raw mode and save your original files – so you can take advantage of future software updates. Of course I had to try this out!
Wildflower and bug – processed in Lightroom with Enhance Details (click for a larger view)
I chose this flower growing in Central Winds Park, near Lake Jesup as my subject. By the way, this is the same spot and subject as this 2015 blog post. There’s a lot of detail in the flower and insect and I was curious about how it would look using the new processing.
I ran Enhance Details on the Raw file. At first, I couldn’t really see any improvement. So I opened the original and enhanced images in layers in Photoshop. I set the layer mode to Difference and then used a levels adjustment to highlight changes.
Using this method at 300% magnification to guide me to where the changes were, I could then see them clearly. The enhanced image was indeed more detailed than the original. But (for this example anyway) they’re extremely subtle! Too subtle to show up in a blog resolution image without a difference map.
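If you’d like to try the same difference-map trick outside Photoshop, here’s a minimal sketch in Python with NumPy. It’s not what Photoshop does internally – just my approximation: the absolute per-pixel difference, multiplied by a gain factor that stands in for the levels adjustment. The tiny arrays and the gain value are made-up illustrations.

```python
import numpy as np

def difference_map(original, enhanced, gain=8.0):
    """Absolute per-pixel difference, amplified (a stand-in for a levels stretch)."""
    diff = np.abs(original.astype(np.int16) - enhanced.astype(np.int16))
    return np.clip(diff * gain, 0, 255).astype(np.uint8)

# Two nearly identical 8-bit "images" – the change is invisible at a glance:
original = np.full((4, 4), 100, dtype=np.uint8)
enhanced = original.copy()
enhanced[1, 2] = 102  # a subtle 2-level change, like Enhance Details makes

dm = difference_map(original, enhanced)
print(dm.max())  # → 16: the amplified difference now stands out against 0
```

In real use you’d load the two exported images (for example with Pillow) instead of building toy arrays, but the idea is the same: subtract, then stretch until the subtle changes become visible.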
Adobe claims a 30% increase in image quality. I’m not sure how they derived this number, but from the examples I’ve seen the results are much more subtle than that.
It works better on some subjects than others, e.g. night photos of cities with lights, or images with artifacts. Improvements are much harder to see on other subjects such as my flower.
I didn’t see (and haven’t heard of) anyplace where it made an image worse.
You pay a penalty in workflow, time, and disk storage when using this. It shouldn’t be your default processing.
Consider it for portfolio images, or photos that you’re printing in a large format. Don’t bother with it for images shared to the web or ones that you’re printing small. Keep your Raw files and you can always go back later and run them through.
If you use Fuji cameras, try it on their X-Trans Raw files.
We’ll hear more soon as the photo community explores this and shares results.
In the future, this or something like it will probably become the default demosaicing approach. Adobe should be commended, and I hope they keep developing it.
That was fun! Thanks for stopping by and reading my blog. Now – go make some photos!
Sometimes after a photo shoot, I’ll skip over images if I’m short on time or something looks too hard to deal with. Other times, I may play with a photo for a while and then set it aside when I just can’t seem to get it right. When I learn a new technique or get a new software package or upgrade I try to go through my image library and pick out existing photos that could benefit from the new capability. And yes, I also notice images that no longer look as good to me as they did at first. Something I did a few years ago may have seemed great then – but tastes change.
I use Lightroom to catalog my photos and I have a keyword called “Process” with three sub-keywords “Color”, “pano”, and “other”. Using these, I mark photos I want to revisit and I’ve built up a collection of them for future processing. I had a little time this week to go through and pick three to work on:
Kelly Park Reflections: Merritt Island, Florida, February 19, 2013. The water was amazingly calm that morning and I like the reflections as well as the detail / lights on the horizon. I bypassed this image at first because of trouble with the white balance. This time through the result is much closer to the look I wanted.
The Main Sanctuary of the Cathedral Basilica, Saint Augustine, Florida, February 28, 2013. Black and white infrared. I don’t remember why I didn’t finish this photo back in February. I like the light, detail, and tonality.
Three more cypress trees: Blue Cypress Lake, near Fellsmere, Florida, June 2, 2012. False color infrared. Since IR doesn’t capture color as your eye sees it, color conversions are very subjective. As I gain experience, my tastes are changing. This version is very different from how I processed other IR photos at the time.
So, some recommendations:
If you’re struggling with an image, don’t delete it. Mark it and move on. Come back and revisit it later.
Organize, document, and keyword your images so you can find hidden gems to re-process.
Review your photo library occasionally. Your photography skills and tools aren’t static. So your portfolio shouldn’t be static either. Revise older images and make them better. You might be surprised what comes out of your archives.
Thanks for stopping by and reading my blog. Now – go revise some photos!
I had a question recently about how I process panoramas – so I thought I’d document my workflow using a recent image as an example. This will be a bit geeky. Next week I should have a more normal post after I finish selecting / editing photos from a visit to St. Augustine.
I was up on Mount Evans near Denver, Colorado with an Olympus E-PL5 camera and a 24-100mm equivalent lens. This is a 16MP camera and the mountains and valley were just too large to fit through that lens and onto that sensor. I really wanted to capture something that would give viewers a sense of the scene. So how did I make a 46MP (9608×4804) wide-angle panorama with the gear I had? Read on.
Mount Evans panorama – the completed 46MP image (click to see larger on Flickr)
This is a multi-photo panorama. Many cameras have a panorama capability built-in. I’m not sure if the E-PL5 has it, because I never use it. Why? I like the flexibility, control, and quality I can achieve with a manual process. I don’t like letting the camera decide everything automatically. And I like the result – huge, rich files that I can print large, or even crop to yield several different compositions.
In this post, I’ll write about using the software that I have (Lightroom V5 and Photoshop CS6) but the concepts are similar no matter what software you have. You’ll need to interpret / apply this info to your own tools and workflow. There are four main phases: 1) Capture, 2) Initial Adjustments in Lightroom, 3) Photoshop stitching and processing, and 4) Final Lightroom tweaks. I’ll give you some hints about each.
Carefully capturing the input frames is extremely important to the end result. Input variations can be hard for software to handle, so try to minimize differences. You should use manual white balance, exposure, and focus.
For horizontal panoramas, shoot vertical frames and overlap them by thirty to fifty percent, but not more. Too many frames mean more seams, and this could add problems you’ll have to fix. If this happens, try removing a frame – it might not impact the final image.
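If you like to plan ahead, a little arithmetic tells you how many frames a scene will take: each frame after the first only adds the non-overlapping part of its field of view. Here’s a short Python sketch – the 120° scene and ~40° per vertical frame are made-up illustrative numbers, not measurements from my shoot.

```python
import math

def frames_needed(total_fov, frame_fov, overlap):
    """Frames required to cover total_fov degrees when each frame covers
    frame_fov degrees and neighbors overlap by the given fraction."""
    if total_fov <= frame_fov:
        return 1
    step = frame_fov * (1 - overlap)  # new angle each extra frame adds
    return 1 + math.ceil((total_fov - frame_fov) / step)

# e.g. a 120-degree scene, ~40 degrees per vertical frame, 40% overlap:
print(frames_needed(120, 40, 0.40))  # → 5 frames
```

Running the numbers before you shoot also tells you when you’re about to end up with far more seams than you need.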
I shoot either on my tripod or handheld. If there’s enough light, I’ve had good luck shooting handheld. I’m careful to keep the camera level and use a grid line or focus mark in the viewfinder aligned with the horizon as a guide.
I always shoot in Raw format, but the stitching will of course work with JPG input frames. It’s best to use a camera / lens supported by Lightroom so that you can correct lens distortions. If you don’t, the distortions can build up across the stitched images and look especially bad when there are straight lines in the scene.
Initial Adjustments in Lightroom:
I load the images into Lightroom and adjust them all identically. I aim for a neutral, low contrast setting across all images. I enable distortion correction and usually turn off sharpening / noise reduction (and deal with them in later steps).
Be conservative with the highlights – I’ve found that stitch software may blow out parts of the image when attempting to blend between frames. I’ll dial down highlights if I have any concern. If I didn’t use manual exposure, I may also try to match white and black points in all the histograms.
The seven source images in Lightroom after the initial adjustments
Photoshop stitching and processing:
Once I’ve got the frames adjusted in Lightroom, I open them as layers in Photoshop. This allows me to try different auto align algorithms (under the Edit / Auto-Align menu), undo them, and try again if there are issues. For the wide-angle shots I usually make, the cylindrical alignment method seems to work best. I check the result at 100%. Sometimes the software doesn’t line up the most important parts of the image perfectly and I’ll use the move tool to make small adjustments.
Seven source images opened as layers in Photoshop and auto aligned.
Next I’ll do the blending (Edit / Auto-Blend). Then I look for variations across the image (most often in smooth sky). You can see the leftmost frame above is a bit darker. If the auto blend hasn’t worked well enough, I’ll undo it and tweak the levels or curves in each layer and then re-blend.
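To see why mismatched levels between frames show up so plainly in smooth sky, here’s a toy feather blend in Python. This is a deliberately simplified stand-in, not what Photoshop’s Auto-Blend actually does: two aligned strips are cross-faded linearly across their overlap. The “sky” values are invented for the example.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Cross-fade two horizontally aligned strips across their overlap zone."""
    h, wl = left.shape
    _, wr = right.shape
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]       # left-only region
    out[:, wl:] = right[:, overlap:]                     # right-only region
    w = np.linspace(0, 1, overlap)                       # 0 → all left, 1 → all right
    out[:, wl - overlap:wl] = left[:, wl - overlap:] * (1 - w) + right[:, :overlap] * w
    return out

# Two frames of flat grey sky, the right one noticeably darker:
left = np.full((2, 6), 200.0)
right = np.full((2, 6), 180.0)
pano = feather_blend(left, right, overlap=3)
print(pano[0])  # → [200. 200. 200. 200. 190. 180. 180. 180. 180.]
```

Instead of a hard seam you get a ramp from 200 down to 180 – which is exactly the kind of gradual variation you’ll spot in a smooth sky, and why matching levels or curves in each layer before re-blending helps so much.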
Once I’m happy with the blend, I’ll flatten the layers, and then rotate and crop the image. I don’t do final cropping at this point – I save that for later in Lightroom. It’s OK to leave a bit of white around the edges. In CS6, Content Aware Fill can fix those for you. If you do use Content Aware Fill, review those areas at 100% for flaws. You might need to touch them up with the clone tool. This is also the time to do any other cloning the image needs.
In Photoshop after auto blend, merge layers and initial crop / straighten
Now do your noise reduction on a new layer. I use Topaz DeNoise 5, but other software works well too. I just like the user interface in this plug-in. Check the result at 100% again and decide whether to apply it to the whole image or selectively. Most of the time I add a layer mask to the noise layer and apply it to the sky and / or smoother parts of the image only. This preserves detail where the noise isn’t obvious (ground, trees, etc.).
Final Photoshop edit after content aware fill, noise reduction on the sky and a dose of Topaz Clarity
I’ll then merge the layers (shift-alt-command-E) and play around with various filters (Nik Color Efex Pro or HDR Efex Pro, Topaz Clarity, etc.) to get to something close to what I want. Then I return to Lightroom.
Final Lightroom tweaks:
Final steps in Lightroom are sharpening, any tweaks to white balance, exposure, white and black points, cropping, etc.
Output in Lightroom after final adjustments (white balance, exposure, sharpening, cropping, etc.)
This workflow takes time. Is every scene worth all this? Nope – I only go through it if I think the final image will be worth it. Even so, sometimes I’ll start the process and stop when I realize that the composition didn’t turn out. You’ll have to decide whether it’s worth the time and effort to you.
I hope I’ve given you some insight. Try it yourself and please let me know how it turns out. Even if you don’t go through the whole thing, some of the info might be useful. I’d be happy to answer your questions. The best place to ask them is in the comments for this post so they’ll help others.
Thanks for stopping by and reading my blog. Now – go make some really big photos!