Tuesday, September 7, 2010

The Human Eye - way more than a camera


© 2008 Simon Hucko

When talking about photography, the comparison between a camera and the human eye inevitably comes up. Lately this discussion usually centers on dynamic range and on getting a scene to look more "natural" through HDR imaging. Sure, we're able to see a lot more detail than our camera sensors can capture, but there are a lot of half-truths and misinformation floating around out there.

Let's start with some basic anatomy (mostly referenced from Wikipedia). The human eye is indeed set up somewhat like a camera. We have a lens, an aperture (your pupil, which is actually located in front of the lens), and a sensor (the retina). The retina has two types of light-sensing receptors, rods and cones. Rods can't perceive color but are very sensitive to light and give us our low-light and peripheral vision. Cones are the color receptors and make up what we think of as our normal vision. Our pupils have an aperture range of roughly f/2-f/8. We have an approximately 170 degree horizontal by 120 degree vertical field of view (which is huge, like fisheye huge!), but the actual sweet spot of your vision is much smaller (sorry, don't have a number for you). As far as resolution goes, I've seen numbers all over the board. You have about 5 million cones in your eye, so call that 5MP of color resolution. Then you have another 100 million rods that give black-and-white contrast, which adds another 100MP. On top of that, we have two eyes, and we're able to sweep them around the scene and soak up more detail, so some people estimate our resolution at around 500MP or more! But I'm not sure our vision is good enough to actually resolve all of this (worse for some than others), and the dynamic nature of our vision and the crazy processing that goes on in our brain make resolution somewhat pointless to talk about. Suffice it to say that what we can resolve and what we actually need to see an image are two different things.
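
For the curious, here's roughly where those giant megapixel numbers come from. This is just a back-of-the-envelope sketch in Python; the 120x120 degree field and the 0.3 arcminute acuity figure are assumptions borrowed from the usual online estimates, not anything I've measured:

    # Rough "eye resolution" estimate, the way the ~500MP claims are usually derived.
    field_sq_deg = 120 * 120              # assume a 120 x 120 degree field swept by the eyes
    acuity_arcmin = 0.3                   # assumed finest resolvable detail, in arcminutes
    pixels_per_degree = 60 / acuity_arcmin
    megapixels = field_sq_deg * pixels_per_degree ** 2 / 1e6
    print(f"~{megapixels:.0f} MP")        # prints ~576 MP, the same ballpark as the 500MP claims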

On to the dynamic range can of worms. Your eyes have a static contrast ratio of about 100:1, or about 6 1/2 stops. Seems kinda low, doesn't it? Static contrast ratio means what you're able to see without any adjustments. Add in allowances for pupil dilation and chemical adjustment of your photoreceptors (sort of like changing the ISO on your camera) and you get to a dynamic contrast ratio of about 1,000,000:1, or roughly 20 stops. (Note: "dynamic contrast" is a term often thrown around by TV manufacturers too, and it's a bit of hand-waving to get themselves a bigger number. Shop by static contrast ratio whenever possible.) So, when scanning around a scene, you're able to see the shadow details and the bright highlights because your eyes are adjusting as you go. Try looking at something bright for a second and then quickly turning to something dark, and you'll notice it takes a moment to adjust and fully see what's happening (think driving into a tunnel, or walking out into a sunny day from a relatively dark building). So your camera isn't really doing that bad a job (most digital cameras capture 5-9 stops of light); it just doesn't have the ability to adjust across a scene the way our eyes do.
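
If you want to play with those numbers yourself, converting a contrast ratio to stops is just a base-2 logarithm, since each stop is a doubling of light. A quick Python sketch (the ratios are the rough figures quoted above, nothing precise):

    from math import log2

    # Stops of dynamic range = log2(contrast ratio).
    for label, ratio in [("eye (static)", 100),
                         ("eye (dynamic)", 1_000_000),
                         ("typical digital camera", 128)]:
        print(f"{label}: {ratio}:1 = about {log2(ratio):.1f} stops")
    # eye (static): 100:1 = about 6.6 stops
    # eye (dynamic): 1000000:1 = about 19.9 stops
    # typical digital camera: 128:1 = about 7.0 stops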

So what do we do? Bracketing shots for HDR makes sense, because we're doing the same thing our eyes do - adjusting the sensitivity to resolve detail in different parts of the scene. What doesn't make sense is viewing these images at 500 pixels wide on the web. No wonder HDR looks so fake: we don't normally see all 20 stops of information at once. For HDR to work, I think you'd have to view it at a large enough size that you can't take in the whole image at once. Actually, for HDR to truly be effective, we'd need to replicate the scene the way our eyes see it - a large image with around 6 stops of dynamic range that adjusts which 6 stops it shows you based on where your eyes are pointed. Sounds kind of sci-fi, but eye tracking is very real, and I imagine something like this isn't out of the question (although it's not easy, and certainly not something that's going to be built into every display out there). That, or we need a display with a bright enough light source and an actual contrast ratio of 20 stops so that our eyes can do what they do best. That might be a more reasonable approach.
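
As a concrete example of the "adjust across the scene" idea, here's a minimal bracketed-exposure merge in Python using OpenCV's exposure fusion. This is a sketch of one common approach (Mertens fusion), not the way any particular HDR program works, and the three file names are placeholders for a -2/0/+2 EV bracket:

    import cv2

    # Load a hypothetical three-shot bracket (file names are placeholders).
    exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

    # Mertens exposure fusion blends the best-exposed parts of each frame directly,
    # without building a true HDR radiance map or needing the exposure times.
    fused = cv2.createMergeMertens().process(exposures)

    # The result is floating point, roughly 0-1; scale back to 8 bits to save.
    cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))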

HDR aside, there are other things our eyes and brain do naturally that influence photography and post-processing. Let's start with vignetting. Why does a vignette on a photo work? Because our vision works the same way. Take a second to examine your peripheral vision. It's a bit darker and kind of oval shaped, right? So looking at a photo with a bit of vignetting gives us the impression of standing there looking at the scene. (Note that I said a bit of vignetting - you can definitely go overboard here.) You can take it a step further and slightly desaturate and blur the edges of the photo along with the vignette, giving a much more true-to-life view. This is partly why I love larger-format film photos taken with older cameras: they seem to produce this effect naturally and wonderfully.
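
If you want to experiment with this in post, here's a rough vignette sketch in Python using Pillow and NumPy. The falloff shape and the 0.4 strength are arbitrary assumptions (and the file name is a placeholder); the point is just to darken the corners a touch, not to crush them:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg"), dtype=np.float32)   # placeholder file name
    h, w = img.shape[:2]

    # Radial falloff mask: 1.0 at the center, gradually darker toward the corners.
    y, x = np.ogrid[:h, :w]
    r = np.sqrt(((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2)
    mask = np.clip(1.0 - 0.4 * r ** 2, 0.0, 1.0)   # 0.4 = vignette strength (assumption)

    out = img * mask[..., None]                     # darken the edges, leave the center alone
    Image.fromarray(out.clip(0, 255).astype(np.uint8)).save("vignetted.jpg")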

Black and white works because we have so many more receptors for light and contrast than we do for color, so we're visually attuned to that anyway. In fact, when you're stumbling around a dark room, you're getting very little color information from your eyes. So, even though black and white started out as a chemical and technological limitation, it also relates to how we see the world on some level.
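
A small illustration of that luminance bias: the standard black and white conversion doesn't average the color channels equally, it weights them by how strongly we perceive their brightness. Here's a sketch using the common Rec. 601 weights (other weightings are just as legitimate, and the file name is again a placeholder):

    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("photo.jpg"), dtype=np.float32)

    # Perceptual luminance: green gets the biggest weight because we're most
    # sensitive to it, blue gets very little. A flat average of R, G, B looks dull.
    weights = np.array([0.299, 0.587, 0.114])    # Rec. 601 luma coefficients
    gray = rgb[..., :3] @ weights

    Image.fromarray(gray.clip(0, 255).astype(np.uint8)).save("bw.jpg")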

Color temperature and white balance are such a pain because we don't normally think about them. Our brains are incredibly good at adjusting and adapting to color temperature, and even mixed lighting can look normal and right in our mind. However, if you've ever tried to photograph a room lit by fluorescent, tungsten, and daylight sources, you've probably gone insane trying to balance them all out. I'm not sure why our brains don't balance this information in an image the way they do in real life; possibly an image on a screen or a wall lacks the context we get in real life and the reference points for color as objects pass through different lighting. The important thing here is to learn to see these color differences before pressing the shutter so that you can adjust for them (gels, custom white balance, turning off the fluorescents altogether, etc.).
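
For a feel of what the camera is doing when it auto white balances, here's a bare-bones "gray world" correction in Python. It's a sketch of one very simple algorithm built on the assumption that the average color of a scene should be neutral; real cameras and raw converters are far smarter, and the file name is a placeholder:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg"), dtype=np.float32)

    # Gray-world assumption: the scene's average color should be neutral gray,
    # so scale each channel to bring its mean in line with the overall mean.
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    balanced = img * gains                        # per-channel multipliers

    Image.fromarray(balanced.clip(0, 255).astype(np.uint8)).save("balanced.jpg")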

Finally, your brain is an incredible processing engine, and it constantly lies to you. There's a difference between what your eyes see and what your brain sees - a lot gets processed out and ignored (because if the 500MP folks are right, that's just way too much information to handle on a constant streaming basis). You're naturally wired to filter out distractions and familiar objects, focusing on what's changing or where your interest lies. This filter never really shuts off, so when you're composing a shot through your viewfinder it's easy to overlook things like branches, telephone wires, and other distractions that will later become glaringly obvious in the photo. With some practice it's possible to override this filter and really see what's there in front of you, which will help greatly when it comes to processing. (Let's face it, cloning out stuff that you could have fixed on site sucks.) This is also why some images just don't come out as strongly as you hoped - all of the context you had when you were standing there taking the photo doesn't get represented in the image, just whatever was in the frame. It's up to you to place that frame around the important elements of a scene, capturing what you feel in a small 2D slice of time. Sometimes that's not even possible, and recognizing what won't work as a photo is part of the battle. Hey, no one ever said photography was easy.

I hope this was somewhat informative and that it gives some insight into why your eyes and your camera see the world differently. Spending a few minutes thinking about the difference can save you a lot of trial and error with your camera. And please, for the love of god, no more sloppy, tiny, halo-ridden, gray HDR - there's nothing natural about it.

Note: most of the information in this article comes from Wikipedia, other random googling, and my own reasoning and experience. If I missed something or got something wrong, please let me know in the comments and I'll correct it.

~S

5 comments:

  1. Awesome discussion! If we're to be visual artists, we should start with a working knowledge of our eyes and the difference between observing and seeing. Sweet!

  2. @ Matt - thanks, I thought it would be pertinent. It always amazes me how "wrong" my brain is about what I'm looking at.

  3. Simon - finally got around to reading this most interesting post, which prompted me to do some of my own web searching. I found this interesting demonstration of the "blind spot" in our eyes, where the optic nerve interrupts the rods and cones. Although you see nothing in this spot, our brain 'fills in' what it 'thinks' should be there - and doesn't always get it right. Check it out yourself: http://www.blindspottest.com/

  4. Here's another example of how your brain 'creates' vision for you: stand about one foot away from a mirror and look at your smiling face. Now shift your eyes from side to side while looking at your eyes in the mirror. Your brain won't show your eyes moving - just staring straight back at you. I wonder how much 'reality' our brains make up for us every day?

  5. @ Dan - crazy stuff, thanks for the link!
