Barry Gordon is a voluble man on the subject of the human brain. There is a volubility even in the inventory of his professional disciplines: neurology, cognitive neuroscience, neuropsychology, behavioral neurology, experimental psychology. (Formally, he is a professor of neurology at the School of Medicine and professor of cognitive science at the Krieger School of Arts and Sciences.) Ask him about the brain, which he has been studying at Johns Hopkins since the 1970s, and he can riff for 20 minutes about what is known and how much is not known, and the oddities of memory and the conservation of brain biochemistry across eons of evolution and questions that no one can answer despite centuries of pondering. How do we learn to pronounce words? Good question. Why are most of us, once past age 5, unable to learn to pronounce a foreign language like a native speaker? Another good question. The typical human brain has associative skills that no computer can match. You can look at a bucket and a shot glass and despite their differences your brain immediately associates them as vessels that hold liquid. How does it do that? Why did it take Gordon's own autistic son years to learn he had to close his mouth when swimming so as not to choke, yet he can remember and place a license plate number he glimpsed only once three years ago? "My science should be able to explain this," Gordon says. "But it doesn't."
Late in 2011, Gordon was brought into a small network of people who for a few years had been discussing the ways and means of supporting and expanding mind/brain science at Johns Hopkins. The ways part of the conversation centered on acknowledging that while Hopkins was strong across several disciplines—cognitive science, neuroscience, neurology, psychology, and more—those disciplines did not interact as well as they might. The means part concerned the "Rising to the Challenge" fundraising campaign, which had begun its quiet phase in January 2010 and was to be publicly launched in 2013. The institution's development strategists were assembling a set of signature campaign initiatives meant to attract major gifts for work on some of humanity's biggest problems. Fundamental to that effort would be bringing together research and scholarly disciplines scattered across Johns Hopkins' divisions, institutes, and centers to create initiatives that would capture the imagination of donors.
The brain science conversation initially focused on neuroscience. At some point—the participants are fuzzy on precisely when—the discussion shifted. Barbara Landau, a professor in the Krieger School of Arts and Sciences' Department of Cognitive Science and vice provost for faculty affairs, recalls, "Somewhere along the line somebody came up with the idea that what could be a focus of all these different efforts was learning, with an emphasis on the science of learning." That shift created the opportunity to pull in other parts of the university, including education, engineering, and the humanities, to create a major new research collaboration. When an anonymous donor came through with a major gift, the Johns Hopkins Science of Learning Institute was born, with Landau as its director.
The institute's creation could be regarded as formal recognition of a paradox. Many life forms learn. Some of them learn well. But they do not approach the ability of humans to learn. For all that ability, though, after millennia of study and research and thought, we still do not have a complete understanding of how we learn. One could go deeper and say that we are in the process of learning how to learn about learning. Were you to imagine a pile of everything known about cognition, memory, reasoning, brain chemistry, and all the other ways the brain works, it would be a large pile. We do know a lot. But do we understand how one day a child learns that the string of letters on a page of her favorite book means "Hop on Pop"? Do we know what happens in the brain when someone learns that depressing the right set of keys on a piano produces an E major chord, or a law student memorizes enough law to pass the bar exam, or a stroke victim learns to speak again? Do we know how best to teach whatever needs to be learned by schoolchildren and college students and adult workers in need of retraining for a new economy? No. Most of our answers to these and similar questions are partial, at best.
One of Gordon's riffs concerns how the basic biochemical building blocks of the human brain have been around for hundreds of millions of years, which to him makes the brain an assemblage of very old technology: "So what is it about the human brain and our culture that takes this ancient hardware and just 18 years or less of education and produces, somehow, a generalized learning machine that seems so much bigger than the sum of its parts?"
Learning is not an elective. "We are forced to learn," Gordon says. "We're born helpless. We stay helpless for a long time. We don't consider our children ready to go into the world until they're 18. It takes a lifetime of learning to learn how to live."
That lifetime of learning is a lifetime of accumulating memories. Richard Huganir became fascinated by memory before he was one of those 18-year-olds deemed ready to go into the world. "When I got to high school, it occurred to me that what made me unique was memory and my experiences," he says. "That's what makes us who we are." He still has a notebook he compiled for a high school science project in 1971. On its pages are notes about memory and protein synthesis. He was trying to recreate an experiment he read about in Scientific American that involved training goldfish. "Didn't quite work out, but it was fun," Huganir recalls. "Goldfish actually learn pretty well."
Forty-two years later, as director of the School of Medicine's Neuroscience Department, Huganir is still studying memory, though mice have replaced the goldfish. He now concentrates on the physiology of the brain's synapses, the structures that enable transmission of nerve impulses between neurons. One theory of learning maintains that when you learn something, your brain has either created a new synaptic circuit or strengthened an existing one. The ability to recall what you have learned is actually the reactivation of that chemical-electrical circuit. To figure out how that happens, the researchers in Huganir's lab work at the cellular and molecular level. "It's an interesting biological problem," he says. "A memory has to be physically encoded in your brain and has to be stable for decades. But the brain is not a computer circuit. It's not hard-wired. It's made of proteins and lipids, and all these things turn over in the brain and get degraded. If a memory is really the synaptic circuit, how do you maintain that circuit within this very mushy, dynamic brain?"
Another aspect of memory that fascinates Huganir is the role of emotion. We learn, sometimes for life, from emotional events because something about emotion makes for vivid memories. Huganir notes that such events release neuromodulators in the brain, chemicals like norepinephrine and dopamine that seem to tap into memory circuits and strengthen them. But memories, powerful as they may be, are not as locked into our brains as once thought, at least not immediately. One of Huganir's most startling findings is that traumatic memories can be erased, or at least the fear associated with them can be erased. "What this means, basically, is that you can edit your memory," he says. "So if you have a memory event, the memory is not that stable. You can recall it and modify it at later dates." But there appears to be a deadline for this sort of revision. If a memory that generates fear is not "edited" within about a week, Huganir says, it becomes locked into our brains as traumatic. He has been working on extending the deadline and thinks his research team may have found a pathway into traumatic memory that could permit erasing it as much as a year after its formation. He is working with Barbara Slusher at the Brain Science Institute to develop a compound that will help tap into this pathway. In the Science of Learning Institute, he is working on the science of unlearning.
A science of relearning is practiced by Michael McCloskey, a professor of cognitive science in the Krieger School. McCloskey studies people with unusual impairments that have left them unable to do something they learned as children. He has been working with a small group of people whose brain impairments make it impossible for them to recognize letters or digits. Nothing is wrong with their memories—they did not forget letters and numbers. They can still speak, they can still write, and their vision is otherwise normal. But when they look at an A or a Q or a 9, they simply cannot recognize it. Something has happened that leaves them unable to look at an X or an 8 and see it.
McCloskey has been working closely with a girl who has an abnormal tangle of blood vessels in her brain known as an arteriovenous malformation, which he calls "a big tangled mess that's not supposed to be there." The blood vessels of an AVM are prone to leaking, and when this girl was 11, her malformation hemorrhaged. Now 13, she has come back to nearly normal motor functions and speech and cognitive abilities, but for one baffling thing—when she looks at letters or digits, all she sees are blurred, unrecognizable shapes. Display a drawing of a peanut and she sees it perfectly. Put the peanut inside a 4, or even merely close to a 4, and it becomes an indistinct blob. Show her one circle placed close above another, and she sees them and recognizes that they suggest the numeral 8. Bring the circles together and as soon as they touch they become indistinguishable blurs. "It's stunningly specific," McCloskey says. Another of McCloskey's subjects is a 60-year-old engineering geologist with a neurodegenerative disease who can read just fine. But display a digit and he cannot see it. He has lost none of the ability with mathematics essential to his work, and he can read Roman numerals and number words like "thirty" or "nine," but he cannot distinguish one Arabic digit from another.
McCloskey says, "These are clearly learning-related problems. If we give any of them a random shape of some kind, they can see it perfectly well. Stored knowledge or processes that have been acquired are getting activated when they look at these things, but then something goes terribly wrong."
For the engineering geologist, McCloskey and his team devised a new system of marks that represent the values 0 through 9. These the geologist can not only see but use mathematically to continue his engineering work. This raises interesting questions about learning, McCloskey says. For example, when the man was a child too young to have learned the concepts represented by numbers, he could still see them. After the onset of his disease, he could not see them precisely because they were numbers, which means the problem stemmed from the fact that he had earlier learned what numbers are. If he had never learned numbers, they would not now be distorted beyond recognition. How do you explain that? And how can you explain that by learning a new system of marks, he can see "numbers" again? Good question.
In experiments with the 13-year-old who cannot see letters, he found that when he modified the forms of letters in certain ways, she began to recognize them. He recalls, "Someone said, 'Well, why don't you just put lines through the text?' And I thought, 'Oh, that won't work.'" He laughs about that now, because when they took a few sentences of text on a computer screen and changed the font so as to cross every letter with a double line—the double strikethrough often used in legal documents—suddenly the girl could read, no problem. A single line does not work. It has to be double. The lines also have to be in the right place through the letters. Too high or too low and they do not work. "Do we know why?" McCloskey says. "No, not really."
One way to understand the dimension of the challenge faced by the science of learning is to consider the difficulty of simply being clear on what counts as learning. A science needs a clear conception of itself and its subject. Imagine doing biophysics without clarity on what was meant by "biophysics." Steven Gross knows a lot of science, but he is not a scientist. He is an associate professor of philosophy in the Krieger School who pays close attention to the work of cognitive scientists in his study of the philosophy of mind. His role in the Science of Learning Institute is to think about what it really means to have learned something.
He points out that learning is far more complex than a simple matter of yesterday you didn't know how to order wine in French, and today you do. Because you have acquired sufficient vocabulary, grammar, and syntax to procure a bottle of Côte de Beaune, does that mean you have learned at least some French? Seems like a reasonable proposition. Then Gross asks, "So does any kind of acquisition count as learning? What do I mean by 'acquisition'? That there was a time when I didn't have it, and a time when I do have it, so now I've learned it?" Yes, that seems right. OK, Gross responds, then what about this: One day an adolescent boy's voice cracks and he sprouts whiskers on his chin. He has acquired secondary sexual characteristics. Does that mean he has learned them? No, that seems silly. So would the more specific "acquisition of knowledge" work as a definition of learning? Well, what about acquisition of a motor skill? We say our toddler has "learned" how to walk. But it is far from clear that this amounts to "acquired knowledge of walking," unless balance and coordination and muscle memory count as knowledge.
And however we define learning, when have we crossed from "learning" to "learned"? Some types of learning are not a matter of degree. Either you know that three is an odd number and Sacramento is the capital of California, or you do not. But returning to his example of ordering a bottle of wine in a French café, Gross points out that knowing enough of the language to overspend on a bottle of wine does not mean you could teach a graduate seminar at the Sorbonne. So at what point have you learned to speak French? When you know 10,000 French words? When you can discuss Camus in a classroom in Paris? When you dream in French?
Mariale Hardiman has watched all of this activity intently. Hardiman is a co-founder and the director of the School of Education's Neuro-Education Initiative, which was first funded by the Brain Science Institute and created to further the application of brain science to education. "We started it with the goal that teachers should know what the science is and have translations so they can use it in their classrooms. Teachers need to understand research to avoid 'neuro-myths' like the 'Mozart effect'"—the spurious idea that exposing children to classical music could increase their intelligence—"and to be able to parse research to understand it. We need to teach that in teacher preparation programs."
Susan Magsamen, director of interdisciplinary partnerships at Hopkins' Brain Science Institute and another co-founder of the Neuro-Education Initiative, says, "We know a lot about how people learn at optimal levels. But education and the science of learning are so disconnected. We know so much from looking at optimal hours of the day for different kinds of learning, thinking about nutrition, thinking about exercise and learning, thinking about stress and learning. But it has not been rigorously applied." School systems adopt new ideas because they seem good and because other educators are excited about them, not because anybody knows whether they work. "That's very different from an evidence-based approach to learning. Many schools of education are still operating like they're in the 1950s."
One example of the evidence-based approach: Hardiman has subjected the work of the Neuro-Education Initiative to outcomes research. "We say that teachers should have knowledge of neuro and cognitive science," she says. "But in reality, what difference does that knowledge make in teacher practice? Does it make any difference? Or is it just some thought I and others have that teachers need to know this?" To find out, she and a research team gave four cohorts of teachers about 30 hours of professional development instruction in brain science, and surveyed them before and after the training. The researchers found that the instruction significantly improved teachers' knowledge of brain science and their attitudes toward applying it to the classroom. More important, Hardiman says, was the finding that after learning about the science, more teachers believed they really could improve the lives of children. "An example of a teacher who does not have a belief in efficacy of teaching is one who is teaching children in poverty and says, 'You know what? There's only so much we can do.' We have found that the information we're giving about neuro and cognitive science has significantly improved teachers' beliefs in their own efficacy and the general efficacy of teaching." The group most affected, she says, was novice teachers, which is important because research shows most teachers who leave the profession do so in their first five years. "If we can use science to help teachers believe more in the power of teaching, that's a huge contribution to the field of education."
In her research, Hardiman has tested use of the arts in classroom lessons to improve retention of information: for example, having students in history class draw pictures, or put historical details into songs, or create imaginary personal letters from the people in the lesson. She and some of her postdoctoral fellows studied 100 children in a Baltimore city school; the experimental group received instruction with arts integration while the control group got the same lesson content but without the art. The results were interesting. Tested shortly after the lessons, the arts group showed no better retention than the control group. But when the researchers tested the kids again three months later, they found those who had been taught content through the arts had retained significantly more than those taught through conventional instruction. Hardiman has a grant from the U.S. Department of Education to study arts integration in 16 more classrooms this autumn.
"We're learning a lot about memory at the molecular level," Hardiman says. "We know biologically something is happening. Now, do teachers need to know those biological mechanisms? No. But do they need to know that something is happening in a child's brain when she remembers information, and that they can change that biology if they use certain techniques? That is what they need to know."
The Science of Learning Institute's website currently links to 64 researchers. A glance at their projects suggests the scope of the institute's disciplinary diversity. Randal Burns, an associate professor of computer science in the Whiting School of Engineering, is working on systems to help reverse engineer the neurophysiology of mammalian brains. Donald Geman, a professor of applied mathematics, works on statistical learning. Biomedical engineer Reza Shadmehr works on human movement control. Jacob Vogelstein at the Applied Physics Laboratory works on neuromorphic engineering—development of hardware systems that emulate the brain. Lisa Feigenson and Justin Halberda in Psychological and Brain Sciences study how infants learn. David Andrews, dean of the School of Education, researches individualized learning. Soojin Park, another cognitive scientist in the Krieger School, works on how the brain learns to recognize scenes. "I know when I walk into a room that it is an office, not a kitchen, and I know it is a familiar place that I've been," she says. "These are things that we do automatically. I'm trying to find out how we do it. It's actually an amazing ability." She notes that seconds after she wakes up in the morning, her brain has perfectly oriented her. How?
The institute does not yet have plans for a dedicated building, though several of those involved in its creation point out the virtues of putting people from various disciplines in one place. Brenda Rapp, a professor of cognitive science in the Krieger School, says, "The physical distances between people at this university are an obstacle to collaboration. Just having to go across the breezeway is an obstacle. It reduces those sorts of encounters you have when you're just getting coffee, which can be valuable as a basis to develop things. Brainstorming is a powerful mechanism. Sometimes you just need to work things out alone, but I see it all the time: I feel like I've really thought about something and I have some ideas, and then I go into my lab and start talking to a couple of people and we'll make so much more progress. Because we're not physically together, I think it's less likely for that spontaneous thing to happen."
The institute has begun making research grants. So far it has funded seven projects whose investigators work in an array of disciplines that exemplify the scope of its inquiry: neuroscience, neurology, psychology, computer science, cognitive science, biomedical engineering, mechanical engineering, applied mathematics, education, radiology, surgery, and biostatistics. The grants cover two years and have titles like "The Role of Astrocytes in Reward-based Learning," "Cognitive, Neural, and Translational Implications of a New Reading Disorder," and "Defining the Genetic Basis for Individual Differences in Learning." Next January, the institute will host a two-day symposium to discuss issues such as critical periods for learning and math anxiety in the classroom.
If the institute does a better job of figuring out how the brain learns, perhaps it will be in part because it has mimicked the brain as an assemblage of distinct but interconnected constituent parts, each part performing a different function but contributing to a collective storehouse of knowledge, with new connections forming day by day. "Science proceeds by reductionism because that has been proven to be a useful approach," Barry Gordon says. "But reductionism has an end point at which it may not work, in which case you need to look at a different frame, or a broader context. Rather than have a whole bunch of individuals hacking at the problem separately, the institute gives us a chance to look at the bigger picture. It's an attempt to apply the reductionist framework with error correction, because your colleagues will tell you when you're doing it wrong and say, 'Let's try another way.'"
When talking about learning, Gordon brings up the work of Nobel laureate Eric Kandel. Kandel won the Nobel for his research on the neural system of Aplysia, a type of sea slug. To the extent that Aplysia can be said to think and learn, it does so with neurons and biochemistry similar to what is found in the human brain. "Sea slugs and us separated a long, long time ago in evolution, and no one suspected that the basic biochemistry had been conserved across so many species and gaps in biologic time," Gordon says. "That implies to me that the magic of how we're different, which includes how we learn, has to be in how everything is put together, not in the parts themselves being so much better. Sure, there has been some improvement in the hardware, but is that enough to explain why Aplysia is not conscious and learning? Why aren't there schools full of Aplysia kids?"