SILC Showcase

Showcase January 2012: Visual processing of spatial relationships

Visual processing of spatial relationships

Steven Franconeri, Stacey Parrott and David H. Uttal, Northwestern University


Related research:

  • Michal, A. L., Uttal, D. H., & Franconeri, S. L. (2014, July). The order of attentional shifts determines what visual relations we extract. Spatial Intelligence and Learning Center (SILC) [Northwestern University]. [URL]

Note: The paragraph titled "Relational processing in graphs" was updated on February 12, 2016.

Figure 1
Both STEM education and broader scientific reasoning rely on spatially depicted relations. These depictions include bar graphs, line graphs, histograms, cladograms, timelines, chemical models, flowcharts, maps, and mechanical drawings (see Figure 1 for examples). Spatial depictions can be an extremely efficient way to present information (e.g., Tversky, 2005), but many students struggle to understand them (e.g., Kozhevnikov, Motes, & Hegarty, 2007; Shah & Carpenter, 1995).

To find out why some students struggle, and to find inspiration for measures that could improve their performance, we must understand how the human visual system processes spatial relations among objects. Past work reveals many aspects of this process, including the types of reference frames that can underlie these relations and how people deal with real or imagined transformations of relations, such as a change in viewpoint (Shelton & McNamara, 2001). Beyond such themes, it is striking how little we know about the mechanisms our visual system uses to process spatial relationships. For each of the depictions in Figure 1, we have little idea how the visual system represents arbitrary spatial relationships among the several objects or parts. But our understanding is actually far more impoverished: we have little idea how we flexibly represent the relation between just two objects. Before we can approach the larger puzzle of how structure is represented among several objects, we must first solve this more basic problem, so that we know the basic units that comprise that structure.

We have started to answer this question by assembling a taxonomy of mechanisms that the visual system might use to process spatial relations. We have divided these possibilities into two major classes that differ in how the objects in a relation are attended (Franconeri, Scimeca, Roth, Helseth, & Kahn, 2012). One class requires that we attend simultaneously across objects; the other requires that attention shift to at least one object over time. We argue for the existence of this novel latter class, in which we must isolate one object at a time. This step is needed to solve a series of well-known problems in vision involving matching object identities to their locations.

As a practical example, when you see your salt and pepper shakers on your dining table, you feel that you can pay attention to both objects simultaneously, and still know which is on the left or right. We predict that this percept is an illusion. Instead, in order to know that the salt is on the left, you need to shift your attention exclusively to the salt.


Evidence for attentional shifts during spatial relationship judgments
Figure 2

To demonstrate that people shift their attention within even the simplest spatial relationship judgments, we use eyetracking (Kahn & Franconeri, in preparation), an electrophysiological "attention tracker" (Franconeri, Scimeca, Roth, Helseth, & Kahn, 2012; in preparation), and other behavioral techniques (Choo & Franconeri, in preparation; Roth & Franconeri, submitted) to track movements of attention. Across these studies, we show that during simple judgments of spatial relationships between just two objects (e.g., left-right decisions), people reliably shift attention toward one object (strategies vary across tasks, but one example is a tendency to shift to the left, as we do when reading a new line of a book).

Figure 2a shows an example using eyetracking. During a spatial relationship judgment between two objects (participants were asked to encode the left-right relationship of a red and a green circle), participants reliably looked at the left object despite the ease of the task. Figure 2b shows an identical pattern using our electrophysiological "attention tracker". Even when participants keep their eyes perfectly still (confirmed by an eyetracker), they still systematically shift attention to one of the objects during a simple spatial relationship judgment (in this case not the left object; for methodological reasons we needed to measure different types of systematic shifts; see Franconeri et al., 2012, for details).
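The eyetracking measure described above can be illustrated with a minimal sketch. This is not the authors' actual analysis code; the data format, the use of first-fixation x-coordinates, and the screen-center threshold are all assumptions made for illustration.

```python
# Illustrative sketch: classify each trial's first fixation as landing left or
# right of screen center, then compute the proportion of leftward fixations.
# A rate well above 0.5 would indicate the systematic leftward bias described
# in the text. All values below are hypothetical.

def left_fixation_rate(fixation_xs, screen_center_x):
    """Proportion of trials whose first fixation fell left of screen center."""
    left_count = sum(1 for x in fixation_xs if x < screen_center_x)
    return left_count / len(fixation_xs)

# Hypothetical first-fixation x-coordinates (pixels) for ten trials
relation_trials = [300, 310, 290, 640, 305, 295, 320, 300, 650, 310]
center = 512

rate = left_fixation_rate(relation_trials, center)
print(f"Left-object fixation rate: {rate:.2f}")  # 8 of 10 trials -> 0.80
```

The same summary statistic could be computed separately for the spatial-relation and identity-relation conditions to quantify the reduction in shifting described below.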

In each case, participants shifted their eyes or attention during spatial relationship judgments. But perhaps they would do this for any judgment. As a control, we tested a second judgment that also required inspecting both objects but removed the spatial component of the relation. In this 'identity relation' control, participants indicated whether the object colors were the same or different. Figure 2 shows that eliminating the spatial component significantly reduced the shift. In the eyetracking case of Figure 2a, there was still some evidence of a shift, but in that experiment we did not actively discourage participants from incidentally encoding the spatial relation, even though it did not help their performance. For the "attention tracker" case of Figure 2b, we added new incentives for observers to ignore the spatial relation in the identity relation condition, and the shifts disappeared completely.


Relational processing in graphs [This paragraph was updated on February 12, 2016.]
We have begun to extend our hypotheses and methods to the perception of bar graphs, to explore how attentional shifts facilitate reasoning about data (Michal, Parrott, & Franconeri, in preparation). Figure 3 shows an eyetracking study in which participants judged either a directional size relation within a two-bar graph ("Does the graph depict [oO] or [Oo]?") or a nondirectional size relation ("Are the bars of equal size or not?"). In Experiment 1, when asked to judge the directional relation, participants systematically isolated one bar with their attention. In contrast, when asked to judge the nondirectional relation, participants systematically shifted their gaze toward the right. A rightward shift places the stimuli in the left visual field, allowing the bars to be processed more holistically by the right hemisphere. In a second experiment, we encouraged one group of participants (the holistic group) to extract relations by imagining a line connecting the tops of the two bars (directional: 'Is the line sloped positively or negatively?'; nondirectional: 'Is the line flat or sloped?'). As a control, we included an individuated group of participants who used the same perceptual framing as in Experiment 1. The holistic framing changed participants' eye movements: directional relations were extracted more like nondirectional relations than like the directional relations of the individuated group. Together, these results show that people can be induced to extract size relations locally or holistically from the same graph display, either by manipulating the type of judgment (directional/nondirectional) or the perceptual framing (individuated/holistic).
Figure 3


We predict that shifting attention is a critical part of constructing visual representations of spatial relations, and that the way these shifts of attention unfold over time critically influences understanding of the relations within both simple and complex graphs. Our new work along this line extends to children and to computer-based interventions designed to facilitate relational processing in graphs, seeking new ways to teach students the 'perceptual' skills needed to interpret graphically presented information.


  • Franconeri, S. L., Scimeca, J. M., Roth, J. C., Helseth, S. A., & Kahn, L. (2012). Flexible visual processing of spatial relationships. Cognition, 122(2), 210-227.
  • Kozhevnikov, M., Motes, M., & Hegarty, M. (2007). Spatial visualization in physics problem solving. Cognitive Science, 31, 549-579.
  • Shah, P., & Carpenter, P. A. (1995). Conceptual limitations in comprehending line graphs. Journal of Experimental Psychology: General, 124, 43-61.
  • Shelton, A. L. & McNamara, T. P. (2001). Systems of spatial reference in human memory. Cognitive Psychology, 43(4), 274-310.
  • Tversky, B. (2005). Visuospatial reasoning. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge Handbook of Thinking and Reasoning (pp. 209-249). Cambridge: Cambridge University Press.