Workers and consumers are awash in chemicals every day. The products we use to clean ourselves and our surroundings, the food we eat and its containers, the buildings we live and work in, and every manufactured product we touch all have the potential to expose us to industrial chemicals.
In 2016, Congress made major revisions to the 1976 Toxic Substances Control Act, or TSCA, the main law regulating these chemicals. The amendments, collectively known as the Lautenberg Act, require, among other things, that the Environmental Protection Agency evaluate 10 chemicals per year to assess their risks to human health.
Researchers at Johns Hopkins wanted to see how the EPA's approach to evaluating the health risks of these chemicals stacked up against accepted best practices in risk science. Keeve Nachman, associate professor in Johns Hopkins' Department of Environmental Health and Engineering, and his team compared the first set of EPA risk evaluations with guidelines for conducting risk evaluations set forth by the National Academies of Sciences, Engineering, and Medicine, or NASEM, which are widely considered the gold standard for chemical risk assessment.
"There are a lot of key principles and philosophies about how risk assessments should be conducted, and NASEM is the most credible body on that front," Nachman says. "Our review of the first set of TSCA risk evaluations found substantial deviations from these best practices."
The results, published in Environmental Health Perspectives, show that the EPA's approach to assessing risks for these chemicals fell far short in many areas, including literature review, problem formulations and scopes, population variability, background exposures, combined exposures, and cumulative risk.
"The goal of TSCA is to evaluate uses of chemicals that may pose risks to public health and try and eliminate those uses. If we don't apply the best and most rigorous scientific approaches to evidence evaluation and risk evaluation, we may make faulty decisions about the true public health risks incurred by populations, and we may make the wrong choices," Nachman says. "If uses of a chemical are too dangerous, the EPA has the power to disallow those uses under TSCA. That's why these evaluations matter so much."
The Hub talked to Nachman about his research and the implications for consumers and workers.
In general, what is the best practice for risk evaluation?
First, we try to draw conclusions about whether exposure to the chemical has health effects. We look at studies in animals, we look at epidemiologic studies in humans, and sometimes we look at mechanistic information like studies in cell cultures and even computer models.
Second, we try to determine the relationship between exposure and those health effects, meaning a quantitative, dose-response relationship. So how much of it do we need to be exposed to before there is a considerable amount of risk? We're trying to find the most sensitive effect, which means we are looking for the first negative health effect to occur as the dose increases.
Once we've done that, we need to map out the different ways our population can come into contact with the chemical. Then we quantify the amount of the chemical we breathe, consume, or get on our skin. We combine that information with our understanding of the dose-response relationship to assess the risks and health burdens faced by people who are exposed.
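To make that last step concrete, here is a minimal sketch of the kind of screening calculation risk assessors describe, using a simple noncancer "hazard quotient" (estimated daily dose divided by a reference dose). The scenario, doses, and reference value below are hypothetical placeholders for illustration, not figures from the EPA evaluations discussed in this article.

```python
# Simplified, screening-level noncancer risk calculation:
# hazard quotient (HQ) = estimated daily dose / reference dose (RfD).
# All numbers below are hypothetical and for illustration only.

def average_daily_dose(concentration_mg_per_m3: float,
                       inhalation_rate_m3_per_day: float,
                       body_weight_kg: float) -> float:
    """Estimate an inhalation dose in mg per kg of body weight per day."""
    return concentration_mg_per_m3 * inhalation_rate_m3_per_day / body_weight_kg

def hazard_quotient(daily_dose_mg_per_kg_day: float,
                    reference_dose_mg_per_kg_day: float) -> float:
    """An HQ above 1 flags exposures above the level assumed to pose little risk."""
    return daily_dose_mg_per_kg_day / reference_dose_mg_per_kg_day

if __name__ == "__main__":
    # Hypothetical worker scenario: 0.05 mg/m3 in workplace air,
    # 10 m3 of air breathed per workday, 80 kg adult.
    dose = average_daily_dose(0.05, 10, 80)   # about 0.006 mg/kg-day
    hq = hazard_quotient(dose, 0.002)         # hypothetical RfD of 0.002 mg/kg-day
    print(f"Estimated dose: {dose:.5f} mg/kg-day, hazard quotient: {hq:.2f}")
```

In practice, assessors repeat this kind of calculation for each exposure route and population of interest, which is why the choices about who is considered exposed or vulnerable, discussed below, matter so much.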
What NASEM provides is guidance on how to make those judgments. It's not a cookbook, but it lays out key principles. In our paper, we found areas where that guidance was not heeded or was interpreted differently.
What kinds of chemicals are we talking about, and in what settings are they used?
One example is trichloroethylene, or TCE, a solvent that's been used for all sorts of things. A long time ago, TCE was used to decaffeinate coffee and clean machinery. It was also used as a weed killer. It's everywhere, and it's still used as a solvent. That was one risk evaluation that was extremely contentious.
When the EPA scopes the task of risk evaluations, they need to consider the people who are uniquely vulnerable to or more exposed to that chemical, like workers and people who live near contaminated sites. When they looked at the populations that are exposed to TCE, these groups were left out or inadequately considered.
We are not only worried about people who are more exposed, but also about people who are more vulnerable to the same exposures. For example, people with co-occurring health conditions, pregnant women, and developing fetuses would not necessarily be more exposed, but the exposures might be more dangerous to them than to the average person. In some of the assessments, they did look at these populations, but in some important ones, like TCE, they did not.
Ten chemicals per year seems inconsequential considering the staggering number of chemicals in use today. Is it enough?
Even though we're not able to move as quickly as we'd like, it's still important to take advantage of the opportunity afforded by the TSCA requirements. Part of the process of acting on chemicals and changing the way chemicals are allowed to be used is doing these risk evaluations to figure out the extent to which the population is exposed and how that relates to some sort of health burden.
What was the most surprising discrepancy that you found?
One of the stages of these risk evaluations is looking carefully at the evidence, the animal evidence and the mechanistic evidence, and making decisions about the most important adverse health outcomes associated with exposure. Our field has evolved tremendously over the last 10 years in its ability to evaluate evidence objectively and rigorously. In the past, literature reviews weren't as rigorous, and bias played a big role in which studies were chosen and carried forward to develop dose-response relationships. That has an impact on the assessment. The movement toward systematic review and more formalized evidence evaluation has greatly improved objectivity and removed much of the bias that can influence conclusions about risks.
But one area where the EPA is falling short, based on our review, is in the implementation of systematic methods. They attempted to use systematic methods and to consider flaws in individual studies, but I don't think they did that particularly well, and we're not the only ones to criticize them for that. The National Academies have directly criticized them for their approach to systematic review.
Why do you think the EPA deviated from best practice in their risk assessments?
I'll just say that there are good scientific principles, we found instances where the EPA didn't follow them, and we pointed those out. Hopefully, future risk evaluations will take these and other comments into consideration and better reflect the best practices in our field.
What do you hope comes out of the study and what changes would you like to see?
We're certainly not the only researchers and advocacy organizations that are looking at this. I'm proud of our distillation of the key problems. And I'm proud that we were able to point to best practices to solve a lot of the problems. But it's tough to know what's going to happen. I'm hopeful, with the current administration, that we could see changes. But I really don't know.