Elizabeth Salesky

Credit: Will Kirk / Johns Hopkins University

Johns Hopkins PhD student named Apple Scholar

Elizabeth Salesky is the first graduate student at Johns Hopkins to receive the honor, which provides support for scholars' research and academic travel

Elizabeth Salesky, a third-year computer science PhD student at the Whiting School of Engineering, has been named a 2022 Apple Scholar in AI/ML—artificial intelligence and machine learning.

One of just 15 graduate students at universities around the world to be recognized as Apple Scholars this year, Salesky was selected based on her innovative research, record as a thought leader and collaborator in her field, and unique commitment to take risks and push the envelope in machine learning and AI.

She is affiliated with the school's Center for Language and Speech Processing and is co-advised by research professor Matt Post and Philipp Koehn, professor of computer science.

Salesky's research focuses on machine translation and language representations, including developing methods to improve language technology such as that used by Google Translate.

"We want to make models for language that are more data-efficient and robust: for example, to better handle a spelling mistake," Salesky said. "Another problem we want to address is how to build translation systems that work for more languages, even in situations where there isn't as much labeled data or there may be more varied written forms."

Humans can quickly understand that 'acomodate' is supposed to be written as 'accommodate,' or that 'langauge' should be spelled as 'language,' which Salesky says is one of the most common misspellings she comes across in her research. Humans are also good at using contextual clues to identify and correct mistakes, even when the mistakes involve other correctly spelled but misused words. For instance, most humans can guess that 'the cat is cut' was most likely supposed to be 'the cat is cute.'

But for computer models, typos can be much harder to identify and resolve; even a single spelling error can greatly change the output of even the most sophisticated natural language processing (NLP) tools, Salesky said.

Another challenge is that NLP tools are built primarily to support English, yet there are more than 7,000 languages spoken in the world. Given the diversity of spoken and written languages, training NLP models to learn more languages will require new data sets that could be difficult to gather.


"On the internet, people often use slang or spell things in ways that aren't formalized, standard spellings. Imagine if someone is typing in a language that's primarily spoken or dialectal. There may be multiple common ways to spell a word, or multiple versions of the same character in use," Salesky said.

Salesky is working to create NLP tools that will allow machines to translate text and speech in multiple languages more robustly. Largely driven by the fact that humans process text visually, Salesky and her collaborators are developing models that use visual representations of text rather than traditional Unicode-based text representations.

Unicode is a standard encoding system that is used to represent characters from many languages in a consistent way. Instead of representing text based on its Unicode characters, Salesky's team is rendering text into an image containing the text and training their model to translate from the image. This allows visually similar forms such as "a11y" and "ally," or "language" and "langauge," to be modeled more similarly though they have different characters.
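The advantage of comparing rendered pixels rather than character identities can be sketched in a toy example. The 5x3 bitmap "font" below is invented for illustration (a real system would use an actual text renderer): under exact character comparison, "a11y" and "ally" match in only half their positions, but their rendered forms are nearly identical pixel for pixel.

```python
# Toy sketch (not Salesky's actual model): contrast character-identity
# similarity with pixel similarity of rendered text. GLYPHS is a
# hypothetical 5x3 bitmap font defined only for this example.

GLYPHS = {
    "a": [".#.", "#.#", "###", "#.#", "#.#"],
    "l": ["#..", "#..", "#..", "#..", "#.."],
    "1": ["##.", "#..", "#..", "#..", "#.."],  # visually close to "l"
    "y": ["#.#", "#.#", ".#.", ".#.", ".#."],
}

def render(word):
    """Render a word as 5 rows of horizontally concatenated glyph columns."""
    return ["".join(GLYPHS[ch][row] for ch in word) for row in range(5)]

def pixel_similarity(w1, w2):
    """Fraction of matching pixels between two same-length rendered words."""
    pairs = [(p, q)
             for r1, r2 in zip(render(w1), render(w2))
             for p, q in zip(r1, r2)]
    return sum(p == q for p, q in pairs) / len(pairs)

def char_similarity(w1, w2):
    """Fraction of matching characters under exact-identity comparison."""
    return sum(a == b for a, b in zip(w1, w2)) / len(w1)
```

Here char_similarity("a11y", "ally") is 0.5, since '1' and 'l' are simply different characters, while pixel_similarity("a11y", "ally") is above 0.96, because the rendered glyphs differ by only a couple of pixels.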

Current translation models typically use a fixed vocabulary of units, ranging from characters to words, that they can model, with anything not in this list marked as "unknown." This makes it a challenge to transfer models to related languages with different scripts, like Hindi and Urdu, where the characters may be unknown to the model. However, since their visual text representation method does not rely on a fixed vocabulary, visual text models could be applied to new languages and scripts without requiring transliteration, normalization, or retraining models from scratch.
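The fixed-vocabulary behavior described above can be sketched in a few lines. The vocabulary and tokens here are invented for illustration, not taken from any particular toolkit: any token outside the training vocabulary, whether a typo or a word in an unseen script, collapses to the same "<unk>" placeholder and loses its content entirely.

```python
# Toy sketch of a fixed-vocabulary lookup (illustrative only): tokens
# outside the training vocabulary all map to the "<unk>" id, so the
# model cannot distinguish a typo from a word in an unseen script.

VOCAB = {"<unk>": 0, "the": 1, "cat": 2, "is": 3, "cute": 4}

def encode(tokens):
    """Map each token to its vocabulary id; unseen tokens become <unk>."""
    return [VOCAB.get(tok, VOCAB["<unk>"]) for tok in tokens]
```

With this vocabulary, encode(["the", "cat", "is", "cute"]) returns [1, 2, 3, 4], but both a typo like "cte" and a Hindi word encode to 0, so all out-of-vocabulary distinctions are erased before the model ever sees them.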

"When you are trying to make technology that supports the way people type online or in their everyday life, or that can support more languages, having robust and flexible models is important and our approach may help," Salesky said.

Salesky received a bachelor's degree from Dartmouth College and a master's degree from Carnegie Mellon University. Prior to joining Johns Hopkins, she worked at MIT Lincoln Laboratory in the Human Language Technology group.

Each Apple Scholar will receive support for their research and academic travel for two years, internship opportunities, and a two-year mentorship with an Apple researcher. Salesky is the first Johns Hopkins graduate student to receive the recognition.