A little girl in a pink dress sits at a table at the Johns Hopkins Laboratory for Child Development. She is just shy of 3 years old. "Can you count these?" a researcher across from her asks, indicating a picture of six apples. "One, two, three, four, five, six," the girl says in a high-pitched voice, pointing confidently at each apple in turn. Then the researcher asks, "So how many apples is that?" The child waves her hands in the air with delight. "Eight!" she cries.
This odd scenario plays out with nearly all 2- and 3-year-olds. It turns out that while kids learn quite early to "count," they are at first simply performing a routine, matching words to objects. It takes longer, until around age 4, for a child to develop a true understanding of cardinal numbers, of quantity.
The phenomenon came to light in the early 1990s, when it occurred to psychologist Karen Wynn, then at the University of Arizona, to ask that apparently obvious follow-up question, "How many?" Her study is a striking example of something Johns Hopkins Laboratory for Child Development co-directors Lisa Feigenson and Justin Halberda take as a guiding principle: We know less about children than we think we do, and expanding our knowledge is often a matter of how one asks the question.
"[People] often have the gut impulse that studying child development and [studying] how children think about the world are sort of self-evident," Feigenson says. "'Well, isn't it obvious? Don't you just look and see what they are doing?' No, it's not at all obvious. There are many, many cases where if you look deeper, what we think on the surface—our first guess—is totally wrong."
"It's reassuring to me as a scientist when the answer is the opposite of what I expected," Halberda adds. "It says, 'Hey! Doing science is important. You can't just, like, make it up.'"
Both associate professors in the Krieger School of Arts and Sciences' Department of Psychological and Brain Sciences, the pair have separate spheres of research. Feigenson works primarily with babies, studying memory development and infant learning. Halberda studies older children and adults, focusing on language acquisition and how we construct mental representations of the world. But they also conduct research as a team on numerical abilities. About a thousand children a year pass through the lab, and many come back for other studies throughout their childhood.
While sometimes we attribute more knowledge to children than they actually have—as in the case of the counting toddler—just as often we underestimate them, Feigenson says. The lab has made some astounding discoveries about children's capabilities. For example, a new study now in peer review has found that children just over a year old can not only add and subtract approximate quantities, as previous research had shown, but also solve for x. That is, do algebra.
Results like these tend to provoke skepticism, even from fellow child development researchers. But Feigenson and Halberda are confident that the lab's conclusions about infant knowledge are as valid as results involving older children, or adults for that matter. It comes down, once more, to that guiding principle: How one asks the question is key, even when the subject lacks teeth and bowel control, let alone the ability to respond in words.
For thousands of years, the mind of the child—particularly the infant—was considered fundamentally unknowable. That, of course, didn't prevent philosophers from speculating. Plato believed that babies were born with innate knowledge, while Aristotle thought their minds were essentially blank slates. Charles Darwin kept one of the first observational journals of infancy, an account of his own son's development. ("April 16th, [1839]. Was exceedingly amused by his pinafore being put over his face & then withdrawn." Science discovers peekaboo.) Swiss psychologist Jean Piaget also based much of his work on observations of his own children, beginning in the 1920s. One of the first thinkers to take an empirical approach to studying development, Piaget believed that human beings construct knowledge by encountering new information and squaring it with their existing understanding of the world, which at birth, according to Piaget, consists of next to nothing.
Child development became a subject of serious study in the 20th century, but it was not until the 1960s that psychologists devised a systematic method to study infants. Babies cannot perform tasks and have little control of their limbs, but researchers noticed that they can control their eyes from birth. With this in mind, developmental psychologist Robert Fantz discovered that infants tended to look longer at patterned images than solid ones, indicating that they distinguished between them and preferred one over the other. Over time, his observations evolved into the "preferential looking" paradigm, based on the premise that babies will pay attention to anything new and interesting. Researchers could "habituate" a baby to a given stimulus and then introduce something new to test the baby's ability to differentiate. Initially, researchers used the method to learn about infants' perceptual capacities: Could they see color? (Yes, but poorly until around 3 months.) At what age did they recognize their mother's face? (At birth.)
In the 1980s, cognitive psychologist Elizabeth Spelke—Feigenson's undergraduate adviser at Cornell—was one of the first to extend the use of preferential looking beyond perceptual questions to cognitive ones. She helped develop the "violation of expectation" experimental method: Babies were presented with physically impossible events—in essence, magic shows—to probe whether they had innate expectations about the world. She found that, as young as 2 and a half months, infants tended to look longer at impossible events, like a ball rolling through a solid wall. From such results, Spelke and others concluded that babies did indeed have a core body of knowledge. The looking-time measure revolutionized the field of infant cognition. Thousands of studies—on topics ranging from whether babies understand gravity to their knowledge of rules of social interaction—have since relied on it, including many conducted at the Laboratory for Child Development.
But the fundamental disagreement that Plato and Aristotle had about the mind of the infant has endured. Those in Spelke's camp, like Feigenson and Halberda, see the discovery of looking time as a watershed moment in the history of psychology. "People have been debating the fundamental issues about what it is to be human for thousands of years," Feigenson says. "What do we come into the world with? How much of our mind is acquired through experience and effort? Insights that allow us to test babies and find out what's in the mind of a baby before much experience has accumulated—we've only been able to do that for 50 years. It's incredibly inspiring." Feigenson and Halberda—along with many of their colleagues—believe that Piaget was wrong: Babies are not born devoid of knowledge. In fact, their studies have shown that babies come pre-equipped with a rather sophisticated body of knowledge in some cases, like the ability to do basic math.
Many developmental psychologists, however, are skeptical of such "super baby" studies, as critics have dubbed them. Marshall Haith, a psychologist at the University of Denver, has written that researchers who rely on looking time to draw conclusions about infant cognition are committing "psychological felonies" and contributing to a "theoretical muddle" in the field of child development. In basic terms, Haith and other skeptics contend that there are other explanations for why an infant might look longer in a given study, perceptual reasons that have nothing to do with cognition. For example, an infant who looks longer at a ball rolling through a solid wall might simply be reacting to the novelty of the ball being on the other side of the wall, rather than to the physical impossibility of the action.
But Halberda says such a reductionist approach raises a deeper question: If the mind of a baby is initially more or less blank, how is knowledge acquired? As he puts it: "You could never learn how object A supports object B"—for instance, a table supporting a cup—"unless you first understand that object A is separate from object B. If you don't have some fundamental abilities at the get-go, you're not going to be able to learn." Looking-time results have converged across numerous domains of knowledge, he and Feigenson say, and studies in adults and newborns offer further evidence of the method's power: In studies with adults, looking-time measures mirror verbal ones, and babies just 3 days old look longer at an image of their mother's face than at that of a female stranger.
"Looking is just a way of orienting your attention," Halberda says. "From the moment they're out of the womb, the baby will orient toward stimuli that are attention grabbing to the infant. In a way, that's all we want to know from the looking time: Did you notice this?" And, Feigenson and Halberda say, other methods of measuring infant reactions to stimuli—changes in heart rate, blood flow, and electrical activity within the brain, for instance—have provided converging evidence that looking time, simple as it is, is revealing hidden complexities within the infant mind.
Emil, my 9-month-old son, bangs vigorously on a xylophone, oblivious to the contribution he will soon make to science. The waiting room of the Laboratory for Child Development is full of toys; even the interior design is friendly. Rainbows of marbles stud the sconces, and the large windows separating the office from the waiting room are shaped like a triangle, a circle, and a square. Graduate student Aimee Stahl guides us into a small room dominated by a puppet stage. The baby is to take part in a study on whether infants learn more after experiencing surprise. We place him in a highchair facing the stage, and I sit behind him in a corner. A camera embedded in the stage will record his reactions, while another, behind me, records what he is seeing. The curtain rises and a hand—Stahl's—appears.
"Look!" Stahl says with exaggerated animation, wiggling a flat piece of black foam core. Emil kicks his chair and whines. "Watch this!" she says, and a blue foam block with googly eyes and a pink nose descends onto the stage. Emil writhes, craning his neck to see me. Stahl decides he might be more comfortable on my lap and we rearrange. The curtain rises once more, Emil quietly sucks on his fingers, and the puppet show begins in earnest. It's not long on plot. There are two characters, the blue block and a bright green ball with red spots. At one point, the blue block character disappears behind the piece of foam core and reappears behind another, on the other side of the stage, as if by magic. (This is the surprise element of the trial.) Not long after, Stahl attempts to "teach" Emil that the blue block—as opposed to the green ball—makes a rattling sound. He is then shown both of the characters once more, accompanied by the rattling sound. If he has learned his lesson, he ought to look toward the blue block when he hears the rattle.
The whole show takes just six minutes. Despite an in-depth discussion about the study prior to the show, I have no idea how one could draw any conclusions based on Emil's behavior. A few days later, I return to the lab, babyless, to find out.
The coding area looks a bit like the control room of a low-budget television station. Banks of monitors are ranged along one wall, interspersed with piles of VHS tapes and labeled plastic tubs: "Cartoon Logic," "Ball Search," "Box Volume." A panel for controlling video feeds from the testing rooms—there are four—is labeled "Do not touch EQ settings on pain of fiery death." Stahl sits down at one of the monitors to demonstrate how looking time is coded, pulling up a video of subject EL154. A still image of Emil sitting in my lap appears, and Stahl plays the video in slow motion. (Someone who did not witness the puppet show will do the actual coding, to avoid any unconscious bias.) For each tenth-of-a-second frame, she clicks on an option: Left, Distracted, Center, or Right. These correspond to where the baby is looking during that slice of time. She clicks at lightning speed, without hesitation. "I've coded literally thousands of babies at this point," she says. "It's very clear." And in this case at least, it does seem remarkably easy; though the baby's other movements—the fingers in his mouth, his kicking legs—are erratic, the movement of his eyes is easy to track and clearly related to what is happening on the stage. (After analyzing the data, Stahl tells me that Emil did indeed learn that the blue block was associated with the rattling sound.)
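For the numerically inclined, here is one way to picture what those clicks become. This is only a sketch with made-up frame codes, not the lab's actual software, but it shows how tenth-of-a-second judgments of Left, Distracted, Center, or Right add up to the looking times a study ultimately compares:

```python
from collections import Counter

# Hypothetical frame-by-frame codes, one per tenth-of-a-second slice of video,
# recording where the baby is looking during that slice.
FRAME_SECONDS = 0.1
frames = ["Center", "Left", "Left", "Left", "Distracted", "Right", "Right", "Left"]

def looking_times(frame_codes, frame_seconds=FRAME_SECONDS):
    """Total looking time, in seconds, toward each region of the stage."""
    counts = Counter(frame_codes)
    return {code: n * frame_seconds for code, n in counts.items()}

times = looking_times(frames)
print(times)  # {'Center': 0.1, 'Left': 0.4, 'Distracted': 0.1, 'Right': 0.2}

# One summary a study might compare across babies or conditions: how much
# longer the baby looked toward one side (say, the rattling blue block)
# than toward the other.
preference = times.get("Left", 0.0) - times.get("Right", 0.0)
print(f"Looking preference (Left minus Right): {preference:.1f} s")
```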
The surprise study has not yet been published. Yet it is clear, Feigenson says, that at least under the conditions they've tested thus far, babies and children are better at learning right after their expectations have been violated. That, she suggests, may be why they look longer at surprising events: They are using the event as an opportunity to learn. "One of the outstanding questions is how we can harness that to think about children's learning in other settings," she says. "How broadly does this apply?"
As with most of the lab's work, the questions raised by this study are already opening new avenues of research. But the co-directors of the Laboratory for Child Development also derive inspiration from a source closer to home: their children.
Feigenson and Halberda met at New York University, where both received their doctorates in cognitive psychology. They went on to Harvard and, as newlyweds, worked together at a lab in Paris before coming to Johns Hopkins to form the Laboratory for Child Development in 2004. Their two daughters, now 6 and 4, have taken part in nearly all the lab's studies. "They love it," Feigenson says. "They say, 'When can we come to work with you again?'"
The demands of parenting have sometimes provided fodder for research. "We wanted to finish dinner and our 2-year-old needed entertainment," Halberda says of one such occasion. To occupy their restless toddler so they could eat in peace, the couple hid M&Ms throughout the living room. While they ate, their daughter periodically returned to ask for clues to find them. The game led to an ongoing study about the precision of spatial memory in children. Four- and 5-year-olds are tested on how many hiding places they can remember at once in a grid of 36 cubbyholes, and in what configuration. (Preliminary results indicate that children of this age have a good memory for the locations of up to five hidden objects, particularly if they are hidden in a geometrical configuration.) Feigenson keeps a notebook for ideas like these. "Trial and error, but some of them do end up being gems," she says.
Once the question has taken shape, the design stage—how that question is asked—kicks in. One important consideration in designing a study is that the trials be fun. "You have to have a sort of sixth sense for what kids enjoy and what kids can and can't do," Feigenson says. "They're not going to do it because it advances science or because they get $10 afterwards." With very young infants, the studies often involve brightly colored images on a computer monitor, like yellow smiley faces, accompanied by silly noises. Older infants watch puppet shows, and studies with children 3 and older tend to take the form of games: stuffed animal races, finding a hidden prize, matching the image of a face with a voice.
But until recently, the scientists struggled with children between the ages of 1 and 2, who do not care to sit passively in a highchair and watch puppets but also cannot understand complicated verbal instructions. "You're kind of at an impasse," Feigenson says. "So we wanted to try and develop some way of assessing those kids' knowledge in a very natural way. What do kids like doing at that age? One that a lot of parents will recognize is that they're interested in putting things in and taking things out of containers." In collaboration with a former adviser, she and Halberda devised a task in which children search within a box for hidden objects using their hands. Here, that sixth sense about how children see the world allowed Feigenson and Halberda to develop a new way of assessing knowledge, one that has since been used in dozens of studies.
Over time, the lab has developed rules of thumb, some of them unexpected. Babies, it turns out, "fuss out" of a study more often if the researcher is wearing black. Beards can also be a problem. And researchers must avoid jewelry and manicures so as not to draw undue attention to their hands rather than the puppets. Well-meaning parents who nudge a baby to pay attention or encourage a child to choose a particular object are another potential obstacle. Even when all such factors are controlled for, children sometimes cry, refuse to participate, throw up. Feigenson laughs. "Our enterprise involves some complications that are just different from what other scientists encounter," she says.
But designing a specific study is not just a matter of making sure parents are coached and children are having a good time. The study, like any in science, must also answer a given question without introducing unplanned variables that could bias the results. The process thus calls for a blend of creativity and rigor. "It's like being a horse whisperer or something," Halberda says. "You don't know that you have an aptitude for it until you get your hands dirty a bit and try."
Given that even a baby's most basic desires can be difficult to read, it might seem absurd to imagine asking something like whether they can do algebra. But postdoc Melissa Kibbe, in collaboration with Feigenson, has developed a "puppet show" to do just that. In brief: An opaque pitcher always pours the same number of pom-poms—say, six—into a transparent receptacle that already contains some pom-poms. (The number already in the receptacle varies from trial to trial.) The idea is that the infant, by watching how the number of pom-poms in the receptacle changes after the pitcher pours, gradually learns there are six pom-poms in the pitcher, without ever directly seeing them. After a number of trials, the infant grows bored—her looking time decreases—at which point Kibbe suddenly pours a different number of pom-poms into the receptacle. Infants tend to look longer at this event, suggesting they recognize that something is amiss. They have, without saying a word or putting a pencil to paper, solved for x.
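To see the algebra in the task, plug in some illustrative numbers (these are mine, not the study's): if the receptacle holds two pom-poms before the pour and eight after, the pitcher's unseen contribution must satisfy 2 + x = 8, so x = 6. When a later pour leaves a total that no longer fits that equation, infants look longer, as if they notice the mismatch.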
The discovery that babies can do algebra may prove too conceptual to make waves outside of academia. But one body of research from the lab has recently made a big splash. MSNBC, The New York Times, and Time, among others, covered a 2011 study led by postdoc Melissa Libertus and co-authored by Feigenson and Halberda. The study concerned the approximate number system—that gut ability that helps us estimate numbers, as when we choose the fastest line at the grocery store by eyeing what's in the carts. Chimpanzees, rats, even guppies have an approximate number system, as do newborn babies. In 2008, Feigenson, Halberda, and another collaborator found that, in teenagers, individual differences in the approximate number system correlated with differences in formal math abilities. And in a 2010 study, Libertus found that infants vary widely in their precision: As young as 6 months, some have a more precise number sense than others. The 2011 study found that there is a link early in life—by age 3—between a child's approximate number system and how well he or she performs in formal mathematics.
"That this primitive thing we all have would be linked to this very rarified, fancy, symbolic human ability is surprising," Feigenson says. But she and Halberda say the popular press and even some in the scientific community have misinterpreted their results. (The Toronto Sun ran one of the more thunderous headlines: "Math ability pre-destined.") "We never said that your math ability is written in your genes," Halberda says. "And actually we don't believe that." A related, still unpublished, study with both identical and fraternal adult twins indicates that individual differences in precision are very likely not genetic, though they clearly arise quite early. An Internet-based study Halberda co-authored recently found that one's approximate number system appears to gradually improve throughout life, peaking late, at about the age of 30. And yet another (unpublished) study out of the lab, involving a simple computer game, indicates that the approximate number system can be improved through training, at least temporarily.
Feigenson and Halberda's approximate number system research, like much of their work, may eventually have practical applications, perhaps influencing the way math is taught. But it is primarily the thrill of discovery that drives them. That, and their obvious affection for children. "I love babies and I love kids," Halberda admits. But Feigenson says that even when we find children cute, it is, in part, because they are mysterious to us.
"Why is it that some people love babies?" she asks. "You see the baby doing something so simple—reaching for a bottle, watching something fall to the floor—and then do it again and again, 20 times in a row. What is driving those behaviors? Unpacking those mundane daily mysteries tells us something fundamental about the human mind."