Image: A smiling headshot of Juan M. Lavista Ferres (Credit: Juan M. Lavista Ferres)

Q&A

Finding the virtue in AI

When the news broke of wildfires raging across Los Angeles in January, Juan M. Lavista Ferres picked up the phone. Wanting to be a part of the aid response, the chief data scientist at Microsoft's AI for Good Lab rang up Planet Labs, a satellite imaging company, and proposed a collaboration.

Together, the two companies mapped which houses and structures would likely be affected by the fires and then worked with the American Red Cross and other partners on the ground.

Without machine-learning technology, conducting such a detailed assessment of the disaster would have taken weeks and dozens of analysts. With it, Ferres says, the team was able to analyze 150,000 LA homes in just a few hours.

Ferres, Engr '06 (MS), has for decades been an advocate of using artificial intelligence to address pressing global issues. His 2024 book, AI For Good: Applications in Sustainability, Humanitarian Action, and Health, curates an impressive range of studies by the lab's researchers, such as devising data-driven strategies to address human-wildlife conflict and identifying areas around the world lacking broadband internet access.

Johns Hopkins Magazine spoke to Ferres about working for Microsoft and the potential for AI applications to solve some of the world's most pressing issues.

Some people are concerned about the rise of AI technology. Why should we be optimistic about AI's promise?

With any technology attracting this much hype, there are people who will be skeptical about some of the use cases. But there's a huge amount of power in AI. There are problems out there where AI is not just a solution, it's the only solution.

"There are problems out there where AI is not just a solution, it's the only solution."
Juan M. Lavista Ferres

One of the first projects I worked on was studying the reasons behind infant mortality and sudden infant death syndrome. We collaborated with Seattle Children's Hospital and applied AI models to CDC datasets and learned that maternal smoking is one of the causes. We found that if a mother smokes even one cigarette a day, it could double the chances of her baby dying of SIDS.

This was an important issue to me because, before I took on this project, I found out that a close friend of mine had lost his child to SIDS.

What are some of your more recent projects where you and your team have leveraged AI technologies to bring about positive change?

Aside from the Planet Labs collaboration for the American Red Cross, we are launching a biodiversity project called Sparrow. Devices would be placed in remote areas, on the forest floor, for example, and their cameras would record any animal life. Each device includes a small graphics processing unit that uses AI to analyze animal movements and then uploads that data via satellite. We think this will change how biodiversity conservationists collect data.

You also worked on using AI to better understand how to detect cancer. What did you discover?

Most cancers had a 20% survival rate around four decades ago, but now the five-year survival rate is 80%. Pancreatic cancer is the exception. While there have been improvements, its overall five-year survival rate is approximately 13%.

We partnered with Johns Hopkins Hospital to look closely at pancreatic cancer. Doctors tell us that if they find pancreatic cancer lesions when they are less than 2 centimeters, which is quite small, the survival rates are higher. So we have been working with doctors and specialists, some of the best in the world, to bring AI technology to medical imaging and ensure they don't miss the smallest of these lesions.

Given all your optimism about AI, are there any concerns or shortcomings? What can't AI do yet?

AI has transformed industries, but it still has significant limitations, including a lack of true understanding, common-sense reasoning, and independent thought. Bias, transparency, and ethical concerns remain major challenges, especially in high-stakes areas like health care and security.

"While AI can process vast amounts of data, it still requires human oversight to ensure responsible use. The future of AI depends on balancing innovation with governance to maximize benefits while mitigating risks."
Juan M. Lavista Ferres

Overreliance on AI can reduce people's incentives to learn, think critically, and develop essential skills, which could have long-term societal consequences. While AI can process vast amounts of data, it still requires human oversight to ensure responsible use. The future of AI depends on balancing innovation with governance to maximize benefits while mitigating risks.

Do you believe K-12 schools should teach all students about the pros and cons of AI?

AI is no longer just something for computer science courses. This is something that every student needs to learn.

There should be lessons not just on coding but also on how to use AI technology, because it's only going to solve more problems as it matures. These kids can learn how to code if you give them a chance, too. I should know; I taught myself to code as an 8-year-old when my parents bought me my first computer. I also volunteer to teach coding at Washington's Global Ideas School.

What do you find fulfilling about the work you accomplish at Microsoft?

Seeing technology make a real difference. From mapping power grids in refugee camps to tracking biodiversity in remote areas, AI has the power to tackle big challenges and help underserved communities. Working with amazing partners to create solutions that protect lives and promote equity is incredibly rewarding—and it's what keeps me inspired every day.