Johns Hopkins experts advise educators to embrace AI and ChatGPT

Panelists discuss the pros and cons of using generative chatbots like ChatGPT in the classroom and express optimism that AI can ultimately enhance learning

Artificial intelligence (AI) chatbots like ChatGPT can solve math problems, draft computer code, write essays, and create digital art—all in mere seconds. But the knowledge and information produced by these large language models are not always accurate, making fact-checking a necessity for anyone using them.

Since its launch in November 2022 by OpenAI, ChatGPT has kicked off a flurry of both excitement and concern over its potential to change how students work and learn. Will AI-powered chatbots open doors to new ways of knowledge building and problem solving? What about plagiarism and cheating? Can schools, educators, and families do anything to prepare?

To answer these and other questions, three experts from Johns Hopkins University came together on Sept. 19 for "Could AI Upend Education?", a virtual event open to the public and part of the Johns Hopkins Briefing Series. The experts included James Diamond, an assistant professor in the School of Education and faculty lead of the Digital Age Learning and Educational Technology Programs; Daniel Khashabi, an assistant professor of computer science in the Whiting School of Engineering; and Thomas Rid, a professor of strategic studies in the School of Advanced International Studies and the director of the Alperovitch Institute for Cybersecurity Studies. Lanie Rutkow, vice provost for interdisciplinary initiatives and a professor of health policy and management in the Bloomberg School of Public Health, moderated the conversation.

Here are five takeaways from the discussion:

1. The anxiety is valid

"The sudden introduction of any new technology into an educational setting, especially one as powerful as [a chatbot with AI], rightly raises concerns," Diamond said. "… There are concerns about plagiarism and cheating, … [and] a reduced effort among some learners to solve problems and build their own understandings. There are also real concerns about AI perpetuating existing biases and inaccuracies, as well as privacy concerns about the use of the technology."

2. The potential to enhance learning is enormous, but beware of the limitations

"ChatGPT is a superpower … in the classroom, and like power in general, it can either be used for good or for bad," Rid said.

"If we look at human knowledge as an ocean, [then] artificial intelligence and large language models allow us to navigate the deep water more quickly, but as soon as we get close to the ground or shore, the training material in the model is shallow, [and the bot] will start to hallucinate, or make things up. So reliability is a huge problem, and we have to get across to students that they cannot trust the output and have to verify and fact-check."

"[With new and emerging generative AI,] there are some really powerful implications for personalized learning [and] easing work burdens," Diamond said. "There's the potential to foster deeper interest and topics among students. There's also the potential of using [these tools] to create new materials or generate draft materials that learners build off and [use to] explore new ways to be creative."

3. Don't count on detection software yet

"You can [use various programs to] identify to what extent what portions of a particular generation [or, say, essay] have been provided by the [large language] model," Khashabi said. "But none of these are robust. None of them are 100% reliable. There are scenarios under which we can say with some high degree of confidence that something has been generated, but for the next few years, as a technologist, I would say, 'Don't count on those.'"

4. Approach AI with curiosity

"Parents and caretakers can sit next to their kid and explore a technology like ChatGPT … with curiosity, openness, and a sense of wonder, … [so] their kids see these tools as something to explore and use [in an experimental way] to create," Diamond said.

"Educators can have discussions with students about what might compel a learner to cheat. … [They] can start to develop their students' AI literacy to help them understand what the technology is, what it can and cannot do, and what they can do with it."

5. Be proactive

"It really is essential that all stakeholders—parents, students, classroom teachers, school administrators, policymakers—come together and have discussions about how this technology is going to get used," Diamond said. "If we don't do that, then we'll wind up in a situation where we have the technology dictating the terms."