Distinguished scientists and educators gathered at Johns Hopkins University this week to highlight cutting-edge research on human learning, from pioneering neuroscience to novel pedagogical approaches.
The biennial Science of Learning Symposium and the fourth annual Symposium on Excellence in Teaching and Learning in the Sciences drew more than 600 registrants for two days of in-depth lectures from leading experts.
In his keynote addresses on both days, JHU Provost Robert C. Lieberman stressed the need for interdisciplinary approaches to understanding lifelong learning. The goal of the joint symposia, he said, was to foster deep and ongoing collaboration among scholars.
Among the more compelling presentations across the two days were those about artificial intelligence and machine learning, talks suggesting that neurological research has applications not only in the health care sector but also in the world of technology.
Alan Yuille, a Johns Hopkins Bloomberg Distinguished Professor of Cognitive Science and Computer Science, discussed deep networks, programs that loosely mimic the layered architecture of the brain to run algorithms extremely quickly, applying increasingly specific filters to the data. He spoke in particular about image recognition software, noting that computers can be trained to recognize certain shapes within an image and to automatically generate captions. The topic is an active area of research at Google and other Silicon Valley giants, and it is of particular interest to civil defense organizations for use in surveillance.
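Image recognition of this kind is typically built on pretrained deep networks. The following is a minimal sketch rather than code from the talk; it assumes PyTorch and torchvision are installed and that "photo.jpg" is a local image file, and it simply asks an off-the-shelf network for its top guesses about what the photo shows.

```python
# Minimal image-recognition sketch (illustrative assumption, not from the talk).
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()   # deep network pretrained on ImageNet
preprocess = weights.transforms()          # resize, crop, and normalize the input

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    probs = model(image).softmax(dim=1)

# Print the network's three most confident labels for the image.
top = probs.topk(3, dim=1)
for p, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.2%}")
```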
However, Yuille said, the technology can be fooled relatively easily. To trick a computer into thinking a penguin is a human, for example, simply add a television to the image. The computer has to learn the context of certain images, and because penguins aren't typically found near televisions while humans are, the program misidentifies the shapes. These tricks, called adversarial tests, help improve the software, Yuille said. Because the software is modeled on the neural networks of the human brain, it has to learn to recognize shapes and contexts, just as people do.
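Adversarial tests are often generated automatically rather than by hand. One widely used technique, the fast gradient sign method, nudges every pixel slightly in whichever direction most increases the network's error. The sketch below illustrates that general idea rather than Yuille's own method, and it assumes a PyTorch classifier, a batched image tensor with values in [0, 1], and the true labels.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the model's error.

    `image` is a batched tensor with values in [0, 1]; `label` holds the
    true class indices. A small `epsilon` keeps the change nearly invisible.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that raises the loss, then clamp to [0, 1].
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```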
Jason Eisner, a professor of computer science at Johns Hopkins, explored how artificial intelligence can aid human learning of a foreign language. Computers can produce what is called macaronic text, an amalgam of two languages in a single sentence that can be adjusted to fit the reader's language skill. For a beginner, the French sentence "nous aurons besoin des gâteaux" can be translated entirely into English as "we will need the cakes"; as the reader's skill grows, French gradually re-enters, producing hybrids such as "we need-erons les gâteaux." Macaronic text could be produced automatically by methods similar to those used by Google Translate.
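As a purely illustrative sketch, and not Eisner's system, a toy generator might pair English chunks with their French counterparts and reveal more French as the reader's estimated skill grows:

```python
# Toy macaronic generator (illustrative only, not Eisner's system).
# Each chunk pairs an English rendering with its French counterpart;
# `skill` between 0.0 and 1.0 controls how much French the reader sees.
CHUNKS = [
    ("we", "nous"),
    ("will need", "aurons besoin"),
    ("the cakes", "des gâteaux"),
]

def macaronic(skill: float) -> str:
    words = []
    for i, (english, french) in enumerate(CHUNKS):
        # Reveal French chunks progressively as skill increases.
        threshold = (i + 1) / (len(CHUNKS) + 1)
        words.append(french if skill >= threshold else english)
    return " ".join(words)

print(macaronic(0.0))   # we will need the cakes
print(macaronic(0.5))   # nous aurons besoin the cakes
print(macaronic(1.0))   # nous aurons besoin des gâteaux
```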
Rather than building a program as an "adult"—by writing into its code everything an adult would know—Eisner argued that it is better to build the software to be a learner. Smart software grows and adapts as it accumulates experiences—predictive text on a smartphone, for example. For macaronic text software to teach a human reader effectively, it must learn how the human learns, Eisner said. So the machine learning algorithm in this case needs to contain a model of the human learning algorithm.
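That idea of a model of the learner can be sketched concretely. In the hypothetical snippet below, an assumption for illustration rather than Eisner's algorithm, the tutoring software keeps a per-word estimate of whether the reader understands the French form, updates it from quiz responses, and switches a word into French only once that estimate is high enough.

```python
class StudentModel:
    """Toy model of a reader's vocabulary knowledge (illustrative only)."""

    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        self.knowledge = {}   # French word -> estimated probability it is understood

    def observe(self, word, understood):
        # Move the estimate toward the observed quiz outcome (simple moving average).
        prior = self.knowledge.get(word, 0.1)
        target = 1.0 if understood else 0.0
        self.knowledge[word] = prior + self.learning_rate * (target - prior)

    def ready_for_french(self, word, threshold=0.6):
        # Show the French form only when the reader will likely understand it.
        return self.knowledge.get(word, 0.1) >= threshold
```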
The first day of the joint symposia featured presentations on neural activity in the brain during learning and training; advanced imaging techniques allow scientists to view learning as it happens in live test subjects. Scientists also lectured on linguistics and the way language and reading skills are learned by children and adults.
Day two of the symposia centered on pedagogy and how to build curricula and classrooms that facilitate learning, especially in the STEM disciplines.
The events were co-sponsored by the Science of Learning Institute and the Gateway Sciences Initiative, a multidimensional program to improve and enrich learning of gateway sciences at Johns Hopkins University.