An emphasis on evidence

In the publish-or-perish world of educational research, the mutually beneficial relationship between journals and researchers has eroded the rigor at the heart of the scientific process. Three School of Education faculty members aim to change that.

Image caption: Masked children in a classroom. Credit: Getty Images

The Johns Hopkins University School of Education is noted for the premium it places on evidence-based educational programs and strategies that are proven to help students learn. In most cases, evidence is provided by countless peer-reviewed studies published in the scientific literature over many decades.

But this emphasis on evidence also raises a reflexive question, says Matthew Makel, an associate research scientist and one of the newest members of the Johns Hopkins faculty. "What qualifies as evidence?" he asks.

Makel is among a growing group of educational researchers and practitioners, including his Hopkins colleagues Vice Dean Hunter Gehlbach and Jonathan Plucker, the Julian C. Stanley Professor of Talent Development, calling for a higher bar to be applied to academic evidence through a set of research standards known as "open science."

Makel, Gehlbach, Plucker, and their cohort of open science advocates say that in the publish-or-perish world of educational research, the mutually beneficial relationship between journals and researchers, both eager to publish, has relaxed the rigor that is the bedrock of the scientific process. The problem is not unique to education: medicine and the physical sciences have undergone similar soul-searching in the past.

Image caption: From left: Matthew Makel, Hunter Gehlbach, and Jonathan Plucker

"The desire to publish what's 'sexy' is too strong," says Gehlbach, who guest-edited an issue on open science for the journal Educational Psychologist, published this month. "The journals want to publish, and the researchers want data that is publishable. It leads to the publication of 'illusory results'—findings that appear to have emerged from the standard scientific process, but key choices have been made behind closed doors. Thus, it is hard for practitioners, policymakers, and other researchers to know what to believe anymore."

Gehlbach opened the issue with an article, "From Old School to Open Science: The Implications of New Research Norms for Educational Psychology and Beyond," co-written with his former graduate student Carly Robinson, now a postdoctoral researcher at Brown University.

Against that backdrop, Makel and Plucker recently called attention to one of the consequences of this lack of rigor in an article in the same issue of Educational Psychologist, titled "Replication Is Important for Educational Psychology."

"It's really common sense. If we promote the value of evidence, we must ensure that the evidence has value. Open science provides that confidence."
Hunter Gehlbach
Vice dean, School of Education

Replication, the ability to reproduce an author's results, is taken for granted in other sciences: each time a notable study is published, other researchers in the field try to duplicate the work. In education, however, replication is rare. Makel and Plucker write that slightly more than one in 1,000 educational studies (0.13%) is ever replicated.

"Open science is really about transparency as much as it is about scientific process," Makel says. "If I can't see the data, or know what analyses were run on it, then I can't fully evaluate, confirm, or trust I know exactly what happened in the study. Most importantly, I can't even attempt to replicate the study if I cannot see how it was conducted in the first place."

While the problem and its root causes are easy to point out, Makel, Gehlbach, Plucker, and other champions of open science prefer to take a positive approach, promoting the principles of open science and encouraging those in the field of educational research to apply them in their own work and when serving as peer reviewers of others' work.

Makel, for instance, also co-authored another article, "Seven Easy Steps to Open Science." It's a call to action of sorts, laying out the principles of open science, including open access to published work and transparency in the publication process. Data and analysis techniques must likewise be open to post-publication scrutiny.

The third principle recommended in the piece is known as "preregistration," in which researchers declare, preferably before data collection, what their hypotheses are and how they plan to collect and analyze data. Preregistration heads off the tendency of researchers and journal editors to publish only the most sensational, positive results.

"Through preregistration, predictions and analytic plans get saved in a private, time-stamped document that is made public when the authors choose," Makel says. "Preregistration separates confirmatory from exploratory findings and removes temptation to analyze and reanalyze data in search of a significant, publishable result."

Then, of course, open science requires the aforementioned standards of reproducibility and replication. And, finally, the researchers say, the principles of open science must be instilled in up-and-coming researchers, so that they become a matter of course in future research.

Further consequences of the current environment of closed science fall on educational leaders and policymakers, who may be making costly, real-world decisions based on supposedly "evidence-based" research. The greatest burden, however, falls on the students, who can experience lasting harm from policies and programs that were intended to help them but don't work in practice.

"It's really common sense," says Gehlbach. "If we promote the value of evidence, we must ensure that the evidence has value. Open science provides that confidence."