Today’s reading is “The Influence of Head Contour and Nose Angle on the Perception of Eye-Gaze Direction” by Stephen Langton, Helen Honeyman, and Emma Tessler, of the University of Stirling (Perception & Psychophysics, 2004).
We’re exceptionally good at telling where someone is looking, and we’re even better at determining whether they’re looking at us. We can quickly spot someone looking straight at us in a crowd of others who are looking away (I hereby dub this the “singles club effect”), and we can discern gaze deviations of as little as 1.4 degrees in people sitting close by.
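To put that 1.4-degree figure in perspective, here’s a back-of-the-envelope sketch of how far the iris actually moves for such a deviation. The ~12 mm eyeball radius is a typical adult value and is my assumption, not a number from the paper:

```python
import math

# How far does the iris shift when the eye rotates by 1.4 degrees?
# Assumes an eyeball radius of ~12 mm (typical adult value; this is
# an assumption for illustration, not a figure from the paper).
eye_radius_mm = 12.0
deviation_deg = 1.4

iris_shift_mm = eye_radius_mm * math.sin(math.radians(deviation_deg))
print(f"Iris displacement: {iris_shift_mm:.2f} mm")  # ~0.29 mm
```

In other words, we can detect an iris displacement of well under a third of a millimeter in someone sitting across from us.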
How do we make such fine distinctions? Are we looking at the eyes themselves? Some specific part of the eyes? Or are we considering the face or head as a whole? William Wollaston illustrated as early as 1824 that the eyes don’t tell the whole story. Consider the following illustration of two sets of eyes:
The eyes are virtually identical, and it’s clear that they are looking in the same direction. (I’ve seen this illustration in several different places, all of which claim the eyes are “identical.” They’re not, but it’s unclear whether that’s due to 19th-century drawing technology or 21st-century reproduction technology.) Now look at the same eyes, but with the lower halves of their accompanying faces drawn in:
All of a sudden, the face on the left is looking past us, to the right, while the face on the right appears to be gazing directly at us. Clearly we’re not just paying attention to the eyes, so what are we looking at? Langton et al. designed a set of clever experiments to answer that very question. They created modern photographic versions of the Wollaston drawings, and using Photoshop, made pictures with the same eyes, but different head orientations. Not surprisingly, they found statistical evidence of the phenomenon you just witnessed above. It even worked when the photos were shown upside down.
Then they went on and made silhouettes that eliminated all facial features except the eyes. Again, the same results were observed: we pay more attention to the orientation (in this case, merely the shape) of the head than we do to the eyes.
Finally, they returned to Photoshop and, using head-on photos, changed only the orientation of the nose. This duplicated the effect, but the bias was weaker than in even the silhouette version of the experiment, and when the pictures were turned upside down, the effect disappeared entirely. Langton et al. hypothesize that face-shape processing might be a lower-level ability than nose-direction processing. Infants, it has been found, pay attention to head orientation at 3-6 months of age, but it isn’t until 14-18 months that they attend to movement in the eyes alone.
What does this all mean? Well, careful poker players would probably do better to examine their opponents’ cards without moving their heads. And think of the “shifty-eyed villain.” How can he fool anyone when he’s always looking askance? He can because we’re not paying attention to his eye movements, just his face. If it were possible, even a shifty-nosed villain would fool more people than a shifty-headed one. In more mundane settings, Langton et al. suggest that we use this facility in conversation as well: notice how the speaker in a conversation never seems to look directly at the listener, yet always knows whether she’s paying attention? He’s “fooling” her by turning his head away, but not his eyes.
This lack of facial “connection” is part of why videophone calls are so unsatisfying. Perhaps videophones will take off only when they artificially rearrange our faces to simulate the way we talk face-to-face.