
Pocket Computational Photography

If you’ve followed this blog for a while, you may have seen my earlier posts on computational photography. If not, you can review them at this link: http://edrosack.com/?s=computational+photography. The term refers to using software algorithms to supplement or replace optical capture processes. Common examples include multi-frame panoramas, focus stacking, HDR processing, and post-capture focus. You can read more about it on Wikipedia: https://en.wikipedia.org/wiki/Computational_photography

As phone capabilities increase, their computational photography power is growing. Camera phones have long been able to do on-the-fly panorama and HDR capture. And here’s an example of a new capability that arrived with the iPhone 7+.

Bokeh

Apple calls this “Portrait Mode”. It’s available in beta on the iPhone 7+ in the latest version of iOS. Since the 7+ has two cameras separated by a small distance, it can compute a “depth map” for the pixels in the frame. The software uses this to selectively blur pixels based on distance, adding a “bokeh” (shallow depth of field) effect that helps with subject isolation. For comparison, here is the non-computed version of the image. You can see that the background looks very different.
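
Apple hasn’t published the details of how Portrait Mode works, but the basic idea (estimate depth from the two slightly offset cameras, then blur each pixel more the farther it is from the subject) is easy to sketch. Here’s a rough, hypothetical Python / OpenCV illustration. The file names, thresholds, and blur sizes are made up, and it ignores real-world complications like the two lenses having different focal lengths:

```python
# Rough sketch of depth-based background blur -- NOT Apple's actual algorithm.
# Assumes two roughly aligned frames from cameras a small distance apart,
# saved as "left.jpg" and "right.jpg" (hypothetical file names).
import cv2
import numpy as np

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("left.jpg")

# 1. Estimate a disparity (depth) map from the stereo pair.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32)
disparity = cv2.normalize(disparity, None, 0.0, 1.0, cv2.NORM_MINMAX)

# 2. Treat high-disparity (near) pixels as the subject, low-disparity as background.
subject_mask = (disparity > 0.5).astype(np.float32)
subject_mask = cv2.GaussianBlur(subject_mask, (31, 31), 0)  # feather the edges

# 3. Blend a blurred copy of the frame back in, weighted by the mask.
blurred = cv2.GaussianBlur(color, (51, 51), 0)
mask3 = cv2.merge([subject_mask] * 3)
output = (color * mask3 + blurred * (1.0 - mask3)).astype(np.uint8)
cv2.imwrite("fake_portrait_mode.jpg", output)
```

The tricky part (and where the real software struggles, as you’ll see below) is the mask: fine details like reeds and hair sit right on the subject / background boundary, where the depth estimate is least reliable.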

Original

It isn’t all perfect, though. The algorithm has problems with small features at subject boundaries. Look closely at the next frame and you can see blurring issues at the edges of the reed.

Phone output

The processing blurred parts of the reed that we wanted sharp. For the first photo above, I cheated and used Photoshop to correct the problems. Maybe the software will be better in future versions.

Here’s one more example.  This is Lynn, rocking an election day t-shirt.  First, the portrait mode version.

Lynn – portrait mode

And finally, the original.  In this case, the software did much better, with no obvious blurring issues.  These two are straight out of the camera with no processing on my part.

Lynn – original

It’s fascinating how photography and computers are merging. For someone who started out programming a large, room-sized Univac in FORTRAN with punch cards, the power and capability that now fits in my pocket is just stunning. I’m glad to have it with me.

What can they possibly think of next?  Do you use computational photography techniques?  Do you like or hate them?

Thanks for stopping by and reading my blog. Now – go compute some images!

©2016, Ed Rosack. All rights reserved.

iPhone vs. “big” cameras?

When I’m traveling, I try to take an iPhone photo when I get to a new place. Sometimes I forget, but when I remember, the iPhone’s GPS capability records the location for me. Then, back home, it’s easier to map out exactly where I’ve been.
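
In case it’s useful, here’s a small, hypothetical Python sketch of that mapping step using the Pillow library. It walks a folder of photos (the folder name is just a placeholder), reads each one’s GPS EXIF tags, and prints decimal latitude / longitude you can drop onto a map:

```python
# Minimal sketch: pull GPS coordinates out of geotagged photos (e.g. from an iPhone).
# "trip_photos" is a placeholder folder name.
from pathlib import Path
from PIL import Image

GPS_IFD = 0x8825  # EXIF pointer to the GPS sub-directory

def to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) plus N/S/E/W ref to decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

for path in sorted(Path("trip_photos").glob("*.jpg")):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)
    if not gps:
        continue  # no GPS data recorded for this shot
    lat = to_degrees(gps[2], gps[1])  # GPS tags 1 / 2: latitude ref and value
    lon = to_degrees(gps[4], gps[3])  # GPS tags 3 / 4: longitude ref and value
    print(f"{path.name}: {lat:.5f}, {lon:.5f}")
```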

This is one of the first photos I made on our trip out to Utah a few weeks ago:

Cedar Breaks National Monument amphitheater – iPhone panorama

When I posted it on Flickr, I commented, “Straight out of the iPhone’s panorama mode. I’m not sure why I have all these other cameras.” And I do like the photo. Phone cameras do pretty well, especially in good light. So I wondered …

When I got home and processed the rest of my photos, I compared some of the other iPhone images with similar shots from my “big” cameras (interchangeable-lens cameras with larger sensors). Here’s another example:

Sunrise at Point Supreme, iPhone version – Panorama mode

Although the light was very pretty that morning, it was also very challenging for the iPhone’s sensor and lens. I’ve tried to adjust the photo to be as similar as possible to the one below, but I can still see major differences. I made the next photo a minute or so later, very near the same spot, with an Olympus E-M5 II micro four-thirds camera and the 12–40mm f/2.8 Pro zoom lens.

Sunrise at Point Supreme
Sunrise at Point Supreme – Olympus version – multi image panorama

After looking at several cases where I had similar photos, I think this example shows why we need to keep our big cameras.

  • The greater exposure latitude and dynamic range of sensors larger than the iPhone’s mean that dark areas keep more detail with less noise, and bright areas are less likely to blow out.  For high-contrast light (sunrise / sunset) this helps a lot.
  • The lens in the iPhone didn’t handle the flare / glare very well.
  • The resolution capabilities of phone cameras are growing.  But with careful capture, I can create much larger images with the big cameras.  For instance, the last photo above is 58 megapixels.  The amount of detail in a file that large is enormous compared to a phone photo.
  • Control:  For me, the big cameras beat phone cameras in flexibility, control, and ergonomics.  I can easily control everything from lens choice to aperture, ISO, shutter speed, etc.  You can get apps for your phone that add better controls, but I find them inconvenient and don’t often use the ones I have.
  • Color / white balance:  The default color and white balance on the phone are very good.  But when I use the big cameras, I can shoot in RAW format, which makes adjusting white balance and color much easier in post processing.  RAW also allows more adjustment latitude, since I’m working with a 14-bit file instead of an 8-bit JPEG (see the quick comparison after this list).  RAW capture is coming to the iPhone soon, which should help.
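
To put rough numbers on that last point, here’s a quick back-of-the-envelope comparison of tonal levels per channel (simple arithmetic, not a measurement of any particular camera):

```python
# Back-of-the-envelope: tonal levels per channel at different bit depths.
for bits in (8, 12, 14):
    print(f"{bits}-bit file: {2 ** bits:,} levels per channel")

# Output:
# 8-bit file: 256 levels per channel       (typical JPEG)
# 12-bit file: 4,096 levels per channel
# 14-bit file: 16,384 levels per channel   (typical RAW)
```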

So those are some of the reasons why I think big cameras are worth the extra weight and trouble of bringing them along.  I use my phone camera to supplement them.  How about you?

Thanks for stopping by and reading my blog. Now – go make some photos!

©2016, Ed Rosack. All rights reserved.