Business leaders and researchers discussed the opportunities and difficulties of implementing artificial intelligence solutions in health care at a virtual event jointly hosted by Harvard Business School and the Harvard T.H. Chan School of Public Health.

The panel was moderated by physician Trishan Panch and included technology executives Javier Tordable and Ben Zeskind and health care researchers Heather Mattie and Leo Anthony Celi.

In discussing applications of artificial intelligence in health care, Zeskind said AI could reveal counterintuitive insights that traditional models of diagnosis would not.

“For all the medical progress that there’s been, it’s still the case that millions of people die of cancer every year. If we keep doing the intuitive thing, we’re going to keep getting the same results. So that’s why counterintuitive insights are so important,” Zeskind said. “I think that’s kind of the beauty of AI and computation.”

Tordable said machine learning models can act as a “black box” in practice, as health care providers may not understand how they work and may be hesitant to use them.

However, Tordable added that a potential shift in attitudes toward these models and their use in medicine may be coming.

“At some point, [the health care system] may be using critical decision support systems that are not based on rules, right? They’re based on machine learning models for which we may not understand exactly how they work, but they may work significantly better than a human,” he said.

Contrary to other panelists, Celi said there would not be “any significant advances in AI in healthcare,” citing disparities in real-world data and the absence of vulnerable populations’ perspectives in building algorithms.

“The purpose of scientific advancement is to improve population health, and the cohort of the people who carry the biggest burden of disease, I don’t think, is going to be impacted by AI,” Celi said. “For that reason, the scientific advancements are irrelevant — are useless.”

Tordable said he agreed these disparities would become a “problem.”

“We have a situation where a few technology companies are well-funded institutions that have access to this type of technology and can spend the budgets and the compute power and have the budget to hire the people that can do these kinds of things,” Tordable said.

In response to audience concern about whether artificial intelligence insights may reflect algorithmic biases rather than reality, Mattie said artificial intelligence “does reflect the world that’s being used to train it.”

Mattie added that she was “excited” about getting to work on curbing potential biases.

Zeskind said a challenge with the perception of artificial intelligence is the “hype that’s been generated,” which tends to outpace AI’s actual outcomes.

“I think if people portray that exciting future as an exciting future, then it is totally fine,” he said. “It’s when people portray something as being here now that’s not really here now, I think that’s where it starts to get a little confusing for people.”

Tordable said that because machine learning was not “perfect,” more work had to be done before artificial intelligence could become a mainstream part of health care.

“I think the best that we can do is to make sure that the process used to build those systems is appropriate, that there is enough data, enough variety for the population that it’s going to affect,” he said.

—Staff writer Paul E. Alexis can be reached at [email protected].

—Staff writer Krishi Kishore can be reached at [email protected].