The Space around Us

Emilie Josephs studies how the brain processes small-scale, reachable spaces in which we perform most of our everyday tasks.

Many graduate students are able to pinpoint a moment that forever defined their academic trajectory. For Emilie Josephs, a PhD candidate in psychology, that moment happened at Boston College when she took an elective class on vision as an undergraduate. In the class, she learned that when we look at something, the visual system takes in a simplified version of what we see and the brain reconstructs what is actually there. “It was fascinating,” she says. “Our visual experience doesn’t feel reconstructed. It feels complete, unaltered. It’s amazing how we have such a rich experience of what we are seeing, given these complex processes.”

Within Reach

Josephs became interested in scene perception—how we perceive a room or environment, the objects in the room, and how they connect with one another, for example—as a member of BC’s Vision and Cognition Lab. But her specialized interest in task-relevant, reachable environments began while she worked as a research assistant at the Visual Attention Lab at Brigham and Women’s Hospital.

“I was co-writing a paper with a postdoctoral fellow on how we search for objects in a large-scale scene,” she says. “As I was writing, I remember this moment where I looked at my desk and wondered if the same processes in the brain applied if the scale was smaller and reachable.”

That moment set in motion Josephs’ research into how the brain processes the small-scale, reachable spaces in which we perform most of our everyday tasks: working on a computer, using cutlery, or painting a picture, for example. She terms these task-relevant, reachable environments “reachspaces” and contrasts them with large-scale, navigable spaces called scenes. “Reachspaces have distinctive behavioral importance, which is task-relevance,” she observes. “This is the view we have while using our hands to interact with the world.”

Nuancing the Field of Scene Perception

Josephs explains that researchers studying scene perception have tended to focus on how the brain processes the visual input corresponding to large-scale spaces, such as rooms or outdoor environments. Less work has been done on how closer-scale settings are processed, such as the spatial layout of objects placed around someone working at a desk.

“Maybe it’s because of language?” Josephs muses. “When we talk about how we process ‘environments,’ we tend to think about larger spaces.” She explains that when researchers began conducting brain-mapping experiments, they mostly looked at how the brain processes objects, faces, and spaces. The questions generated from these early studies were so rich and intricate that no one really accounted for whether the spaces in question were close at hand or far away.

“Historically, the field has assumed that brain processes representing closer-scale, reachable environments are the same as those representing larger-scale, navigable ones,” she says. “My work challenges this.”

Josephs currently works with Talia Konkle in Harvard’s Cognitive and Neural Organization Lab, affectionately known as Konklab. The lab aims to characterize the representational spaces of the mind and how they are mapped onto the surface of the brain. There, Josephs addresses a number of related questions: How does the brain process reachspaces and scenes differently? How different are the image-computable statistical regularities of their visual features? How do high-level meaning and semantic associations differ between the two? And how does this affect things like attention and memory? “We explore these questions using behavioral and computational methods,” she says.

In one of Josephs’ experiments, participants in a brain scanner are shown two kinds of pictures: reachspaces, such as a kitchen countertop with a cutting board two to three feet away, and scenes, such as a full kitchen. “In the parts of the brain that are visual, we search for regions that prefer reachspaces to large-scale scenes,” she explains. “From our results to date, we get the sense that the brain is not using the same mechanisms to represent both.” Josephs is also testing the theory that, in contrast to pictures of a scene, pictures of reachspaces have stronger associations to words that describe actions performed by hand, such as “chopping” or “stitching.”

Learning to See Again

Josephs hopes that her research will be useful for those who have lost their sight. Retinal prosthetics, for example, contain light-sensitive surfaces that interface with the brain to convey where light is coming from. If the information programmed into these prosthetics helps only with processing large-scale environments, the devices may not support reaching actions at a smaller scale. Josephs’ findings on reachspaces may be useful in enhancing current designs.

Her research may also inform the design of sensory substitution prosthetics, in which grids of electrodes attached to a patient’s back or tongue produce touch sensations corresponding to visual input. Such prosthetics give patients the possibility of learning to perceive the world through another part of the body. “The work done on retinal prosthetics and sensory substitution prosthetics has a long way to go,” she says. “However, I hope my work might inform designs that can be adjusted based on what kind of view a patient is experiencing at any point in time.”

Josephs acknowledges that a great deal remains to be learned and that her work on reachspaces doesn’t provide a complete understanding of what the brain is doing. “I’m not saying that how the brain processes near and far environments is completely different,” she explains. “I’m saying we should have a more nuanced view of how we represent space in general. It’s so fun how interesting this work has turned out to be.”

Photos by Molly Akin
