
Johns Hopkins University, Est. 1876

America’s First Research University

A black and white visual anagram of a butterfly and a bear in gif form

Credit: Johns Hopkins University

Brain science

Seeing double: Clever images open doors for brain research

New 'visual anagrams' expand ability to test human perception

Media contact: Jill Rosen
Email: jrosen@jhu.edu
Office phone: 443-997-9906
Cell phone: 443-547-8805

New artificial intelligence-generated images that appear to be one thing, but something else entirely when rotated, are helping scientists test the human mind.

The work by Johns Hopkins University perception researchers addresses a longstanding need for uniform stimuli to rigorously study how people mentally process visual information.

"These images are really important because we can use them to study all sorts of effects that scientists previously thought were nearly impossible to study in isolation—everything from size to animacy to emotion," said first author Tal Boger, a PhD student studying visual perception.

"Not to mention how fun they are to look at," added senior author Chaz Firestone, who runs the university's Perception & Mind Lab.

JHU's Perception and Mind Lab created visual anagrams that are two pictures in one

Image credit: Khamar Hopkins / Johns Hopkins University

The team adapted a new AI tool to create "visual anagrams." An anagram is a word that spells something else when its letters are rearranged. Visual anagrams are images that look like something else when rotated. The visual anagrams the team created include a single image that is both a bear and a butterfly, another that is an elephant and a rabbit, and a third that is both a duck and a horse.
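The release doesn't say how the tool produces such images, but published diffusion-based visual-anagram methods work by denoising one set of pixels under two orientations at once: at each step, the model predicts noise for the image as-is and for its rotated view, the second prediction is rotated back, and the two are averaged. A minimal NumPy sketch of that averaging step, with a stand-in denoiser (all function names here are hypothetical, not the team's code):

```python
import numpy as np

def rot180(x):
    # 180-degree rotation, the transform behind the bear/butterfly flip
    return np.rot90(x, 2)

def dummy_denoiser(x):
    # Stand-in for a diffusion model's noise predictor; a real
    # implementation would call a pretrained network here.
    return 0.1 * x

def anagram_noise_estimate(x, denoise=dummy_denoiser):
    """Average the denoiser's noise estimates across two orientations.

    Predict noise for the image as-is and for its rotated view, rotate
    the second estimate back, and average. Denoising with the averaged
    estimate nudges the single set of pixels toward satisfying both
    interpretations at once.
    """
    eps_upright = denoise(x)
    eps_rotated = rot180(denoise(rot180(x)))  # rotated-view estimate, mapped back
    return (eps_upright + eps_rotated) / 2

x = np.random.default_rng(0).standard_normal((8, 8))
eps = anagram_noise_estimate(x)
```

One consequence of the averaging is that the combined estimate is symmetric under the rotation, which is exactly what lets a single image carry two readings.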

"This is an important new kind of image for our field," said Firestone. "If something looks like a butterfly in one orientation and a bear in another—but it's made of the exact same pixels in both cases—then we can study how people perceive aspects of images in a way that hasn't really been possible before."

The findings are published today in Current Biology.

A black and white visual anagram of a mouse and a cow in gif form

Image credit: Johns Hopkins University

The team ran initial experiments exploring how people perceive the real-world size of objects. Real-world size has posed a longstanding puzzle for perception scientists, because one can never be certain whether subjects are reacting to an object's size or to some subtler visual property, like its shape, color, or fuzziness.

"Let's say we want to know how the brain responds to the size of an object. Past research shows that big things get processed in a different brain region than small things. But if we show people two objects that differ in how big they are—say, a butterfly and a bear—those objects are also going to differ in lots of other ways: their shape, their texture, how bright or colorful they are, and so on," Firestone explained. "That makes it hard to know what's really driving the brain's response. Are people reacting to the fact that bears are big and butterflies are small, or is it that bears are rounder or furrier? The field has really struggled to address this issue."

A black and white visual anagram of a rabbit and an elephant in gif form

Image credit: Johns Hopkins University

With the visual anagrams, the team found evidence for many classic real-world size effects, even when the large and small objects used in their studies were just rotated versions of the same image.

For example, previous work has found that people find images more aesthetically pleasing when they are depicted in ways that match their real-world size—preferring, say, pictures of bears to be bigger than pictures of butterflies. Boger and Firestone found that this was also true for visual anagrams: When subjects adjusted the bear image to be its ideal size, they made it bigger than when they adjusted the butterfly image to be its ideal size—even though the butterfly and the bear are the very same image in different orientations.

A black and white visual anagram of a duck and a horse in gif form

Image credit: Johns Hopkins University

The team hopes to use visual anagrams to study how people respond to animate and inanimate objects and expects the technique to have many possible uses for future experiments in psychology and neuroscience.

"We used anagrams to study size, but you could use them for just about anything," Firestone said. "Animate and inanimate objects are processed in different areas of the brain, too, so you could make anagrams that look like a truck in one orientation but a dog in another. The approach is quite general, and we can foresee researchers using it for many different purposes."

The work was supported by the National Science Foundation Graduate Research Fellowship Program.