Building Smarter Pathways: How AI Can Support Precision Oncology Within a Regulatory Framework
Stephen Speicher, MD, MS
Dr Stephen Speicher highlights how evolving AI regulations, clinician collaboration, and transparent, human-centered frameworks are critical to safely integrating AI into oncology care pathways and enhancing decision-making at the point of care.
Please state your name, title, and any relevant clinical experience you’d like to share.
Stephen Speicher, MD, MS: My name is Stephen Speicher. I am trained as a pediatric hematologist and oncologist. I still practice out here in California, and I work currently as the head of clinical oncology and patient safety for all of our point-of-care solutions at Flatiron Health, where we develop an oncology-specific electronic medical record. I also serve as the vice chair of the Electronic Health Record Association's (EHRA’s) artificial intelligence (AI) task force.
How do you see regulatory frameworks evolving to ensure the responsible adoption of AI in health care?
Dr Speicher: The regulatory environment right now in AI and health care is extremely complex. It's ever-evolving. It's probably going to be changing by the time we finish this interview and definitely by the time this is actually published. We have to start by grounding ourselves in why these regulations are important. We have pretty unanimous consensus among stakeholders that some form of regulatory environment is important for AI.
As we think about high-risk areas where these applications can be used, including health care, there's even more of a focus on how these things can be regulated. The data point that I always go back to, and why these regulations are important, comes from the American Medical Association's (AMA's) most recent survey of clinicians. They found that one of the major barriers to adoption among clinicians is the lack of assurance that some sort of regulatory framework is guiding the safe and high-quality development, deployment, and use of AI. Starting there, we need to understand the why and understand that there needs to be a regulatory framework set up specifically for AI as these technologies are being used.
When you think about the evolution of these regulatory frameworks, one thing that's really interesting, as we've seen over the last year and a half, is that, as with any industry, when regulators originally looked at AI there was quite a significant knowledge gap in terms of what they understood about AI, how it would be used, and what the technology could actually do. The regulators are slowly but surely getting more sophisticated and gaining a better understanding of the actual technology. Therefore, the regulations that we're starting to see are evolving in a more sophisticated way, which is what we would hope for with any developing technology that's starting to be more widely used.
We're starting to see quite a few framework trends as we look across the various states that are starting to propose regulatory statutes. These vary in degree on a state-to-state basis, because we are seeing a lot of the regulation happen at the state level.
One of the most formative ones has been this idea of “human in the loop,” or the ability for human override. When AI is being used, specifically in health care, is there a human who is able to override the system if the AI is not behaving in the way we would expect? There are various approaches to where these human-in-the-loop principles come in.
One of the other big standards or frameworks that's being talked about ubiquitously across federal and state regulators is this idea of transparency standards. How do we make sure that there's significant model transparency on the developer's side? How do we understand how the model is being developed and monitored over time to ensure that there's no significant drift? All of those things are extremely important to consumers as they think about regulatory standards. And if we find out that these AI tools are not working appropriately, what are the disclosure and reporting capabilities and requirements for an individual who is using them?
There's a whole trend around independent third-party validators, so this idea of assurance labs and who should be validating these tools and many others. We're starting to see quite a few trends as we look across the regulatory framework. That evolution has occurred over the last year or so.
What are the key challenges in aligning AI regulations across different health care stakeholders, including providers, payers, and technology developers?
Dr Speicher: Like I mentioned, there is fairly unanimous understanding that these regulations are coming in some capacity or another. Whether people want it or not is a different story, specifically on the developer front and on the payer front, if they're developing things. The consumers definitely want some form of protection. States are recognizing this and so we're seeing these regulations start to appear.
As far as how you get alignment across these stakeholders, that's the million-dollar question. When you think about payers, technologists, providers, patients, there are naturally some incentives that are a little bit different. We have to be open to discussing those things and working alongside different stakeholders to create meaningful regulations in this space.
One thing that I will say is that these regulators are not likely to make a ton of distinctions between various industries and various use cases. We have to recognize that if an AI regulation is passed, it's likely going to impact health care technology vendors. It's going to impact payers and it's going to impact the clinicians or health systems that are actually deploying the technology. We really do need to work together to find some sort of consensus on a variety of different issues where there might be a little bit of disagreement in terms of how these things should be thought about.
Something that we're spending a lot of time thinking about, and an area where there is discussion to be had, is liability. Who is ultimately responsible or liable for the AI if the tool does not behave appropriately? If there's a safety event, or if there are quality concerns, who is responsible for the AI? The way we think about it, at least on the developer side and within the electronic health record (EHR) space, is that this is exactly why it's so important to carve out specific regulations.
Health care is such a unique industry. We're not talking about general consumer-facing technology used by everyday end users. We're talking about a technology that's going to be deployed in a health care setting. If you think about the clinician use cases, it's going to be deployed and utilized by well-trained clinicians. We have to take that into account as we think about ultimate liability and responsibility. Specifically, for the tools that are going to be used at the point of care, it's really important to have that human in the loop, or a meaningful review by a well-trained user, in order to ensure safety. At least right now, we just don't have enough data to really support that.
I don't think anybody in the health care community is looking for autonomous providers to be taking over the health care industry. We have to recognize the fact that there is a trained clinician at the end who is using these tools. How do we make sure we build that into the liability frameworks, if there are liability frameworks that are going to exist? It's really important that we think about those things across the various stakeholders.
That's where there are going to be some challenges as we start to try to develop meaningful AI regulations. I will say that, at the EHRA, one of my roles is bringing together groups like the AMA that represent providers and groups that represent technology developers, so that we understand our goals and can work together to help drive meaningful regulations in this space.
How can regulatory bodies and industry leaders collaborate to establish clear guidelines for AI validation, safety, and clinical integration?
Dr Speicher: The biggest thing is going to be open lines of communication between the various stakeholders as these regulations are developed and ultimately passed, and then figuring out how we actually comply with them. There have to be discussions at various levels with these regulators to help them understand what our goals are and what we're actually doing here. We have to think about that across all of the various players.
As I think about the technology development space and technology developers in general, not all technology developers are the same. AI is being developed in a variety of different capacities, whether that's within a large health system, within an established major technology company, within a very large EHR vendor, within a very small EHR vendor, or within a startup company. [We have to be open to] understanding all these different players and how we plan to regulate across this space. We have to be open to communicating about that and understanding how these things are being done.
One thing that we've seen in some of the pending regulations is they're starting to define, with a little bit more granularity, all of these different levels. Are you developing the model? Are you developing the actual tool? Are you just integrating this tool into your existing software system? Are you deploying this? Are you the end user? Where do those regulations lie? Coming up with some common taxonomy so we can make sure that we're talking about the same thing when we're speaking about these concepts is going to be really important.
It all goes back to education, open lines of communication, and making sure we're all showing up at the same conferences to have those discussions. That's where this is so important and where I see a lot of opportunity in the coming years as these regulations start to pass and roll out amongst the states.
How do you see AI reshaping cancer treatment pathways and decision-making?
Dr Speicher: Oncology is one of those areas that's most ripe for AI technologies. I will say, as I'm working for an oncology-specific EHR company and working alongside a ton of community oncologists as they are trying vigilantly to practice really high-quality medicine, they are so excited about these tools. Most clinicians are really excited about these tools.
I tell people all the time, I still practice, and as clinicians we always feel that we're left behind in terms of technology development. We are the sole users of fax machines; we always go back to that reference. We always feel like we are behind the ball when it comes to technology development, specifically these digital technologies. Now, for the first time, we feel like we are poised to utilize these tools and are ready for them. Specifically, as I think about oncology, these oncologists are excited and ready to use these tools.
What we know about oncology as a specialty is that evidence generation, new treatment decisions, and overall guidelines are evolving at a rate that is really hard for any one person to keep up with. This is a prime example of where technology has an opportunity to play a part. We did a recent survey of EHR vendors, and this was similarly seen in an AMA survey looking at clinician sentiment and what clinicians are most excited about when it comes to different tools. Clinical decision support continues to be one of the things that clinicians are most excited about.
Historically, these treatment pathways and decision support tools have been based on very basic matching algorithms, but we're evolving to think about how we can take them to the next level. How can we start to leverage AI in these tools? That's where there's so much opportunity, and that's what the next stage of decision support technology is going to look like.
It's no longer basic matching based on guidelines. How do we take into account the plethora of data that we have, and how do we build much more sophisticated technology? As I think about the way we do cancer treatment pathways right now, the backbone and core of these pathway programs is institutional knowledge and organizations like the National Comprehensive Cancer Network (NCCN) and their guidelines, and I think it always will be. It's always going to be about guideline concordance and guideline compliance. That will always be part of what treatment pathways look like. But what does it potentially look like when we layer in some AI and make it a little bit more sophisticated, to help us understand the patient in front of us?
We understand that patients are so individual, and we can get to the next stage of precision medicine as we think about leveraging AI. What does a pathways tool look like that uses a backbone of something like the NCCN guidelines but then layers in this plethora of data that we have and says, for this specific patient, because of XYZ criteria, have you thought about these different things? It starts to function as a tool for clinicians to make informed decisions and to support them in the decision-making process. It's not telling them what to do or putting an answer in front of them to click through; it's helping to guide their decision-making and bringing all the information and data to the point of care so they can really feel confident in those decisions.
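To make that layering concrete, here is a minimal, hypothetical sketch of a guideline-backbone pathway check with patient-specific advisory flags. Everything in it, including the diagnosis, regimen names, biomarker, and thresholds, is an illustrative placeholder rather than actual NCCN content or Flatiron's implementation.

```python
# Hypothetical sketch of a guideline-backbone pathway check that layers
# patient-specific advisory flags on top. Diagnosis, regimens, biomarker,
# and thresholds are illustrative placeholders, not real NCCN content.
from dataclasses import dataclass, field


@dataclass
class Patient:
    diagnosis: str
    stage: str
    biomarkers: dict = field(default_factory=dict)
    creatinine_clearance_ml_min: float = 90.0  # illustrative organ-function value


# Backbone: simple guideline-concordant matching (placeholder entries).
GUIDELINE_PATHWAYS = {
    ("example_carcinoma", "IV"): ["Regimen A", "Regimen B"],
    ("example_carcinoma", "III"): ["Regimen C"],
}


def concordant_options(patient: Patient) -> list:
    """Return the guideline-concordant regimens for this diagnosis and stage."""
    return GUIDELINE_PATHWAYS.get((patient.diagnosis, patient.stage), [])


def patient_specific_flags(patient: Patient) -> list:
    """Layer patient-level data on top of the backbone as advisory prompts.

    These are the "because of XYZ criteria, have you thought about..."
    messages; the clinician still reviews and makes the decision.
    """
    flags = []
    if patient.biomarkers.get("EXAMPLE_MUTATION") == "positive":
        flags.append("EXAMPLE_MUTATION positive: consider the targeted-therapy option.")
    if patient.creatinine_clearance_ml_min < 60:
        flags.append("Reduced renal clearance: review dosing for renally cleared agents.")
    return flags


if __name__ == "__main__":
    pt = Patient(
        diagnosis="example_carcinoma",
        stage="IV",
        biomarkers={"EXAMPLE_MUTATION": "positive"},
        creatinine_clearance_ml_min=55,
    )
    print("Guideline-concordant options:", concordant_options(pt))
    for note in patient_specific_flags(pt):
        print("Advisory:", note)
```

The design point is the one Dr Speicher describes: the guideline backbone determines what is concordant, and the layered patient data only adds advisory prompts that the clinician reviews and can override.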
Personally, when I was practicing full time, those daily decisions could be so debilitating. You're just worried: am I making the right decision? You think about it, and think about it, and think about it. If there are tools that can bring all of that data and information to the point of care, you can feel more confident in your decision, and your patient can feel more confident in the decision that's being made for them. That is what I think the future of decision support can look like across treatment pathways, diagnostics, and all of these different areas of the patient journey. Making sure that you are customizing a treatment pathway and a plan for a patient is so exciting, and an incredible use of this technology.
What advice do you have for health care organizations looking to adopt AI while ensuring regulatory compliance and patient-centered care?
Dr Speicher: This is going to be a huge challenge. I already know it's a huge challenge for health care systems and for practices large and small. How do you vet these various vendors? How do you make sure you have safe deployment of these tools? How do you choose the right vendors? How do you make sure that they're being utilized in the right way? And there's also this whole new layer of regulatory compliance. I'll address each of those pieces separately.
Let's start with the regulatory compliance piece. If I'm speaking directly to a health care provider, one huge trend that we've seen is that there is currently a lack of strong federal regulation for AI in health care. That might change over time, but right now, that's the state that we're in. Therefore, the various states are starting to recognize this gap and roll out state-specific regulations. So, I would say: understand where you are practicing.
If you're in a single state, that's just one state's regulations that you need to follow. If you have a health system that spans multiple states, you're going to want to pay close attention to what those different states are doing in the AI space, because some of those regulations might apply to the vendors you partner with, and some might apply to you specifically as a deployer or user of AI. You need to recognize that there are going to be differences on a state-by-state basis, at least in how we currently think about this. So be aware that there may be a state regulation, and then try to follow it. I know it seems like a lot of work, but it's important, as you think about rolling these tools out, to keep in mind the regulatory frameworks that exist on a state-by-state basis.
As for safety, quality, and how to vet these tools: vet them the same way you vet any new technology. I know that most major health systems and large practices have some sort of process by which they bring in new emerging technologies, whether on the software side or the hardware side. I would say having a carve-out for AI tools and developing a series of questions related to the AI tools that you're assessing [is important]. You can go back to those initial trends that we're seeing on the regulatory side; you want to sound sophisticated when you're talking to these vendors.
As a vendor myself, I love when practices push us on our technology and ask questions about the model and about how their clinicians are going to have visibility into it. Is there transparency there? Ask questions about monitoring for drift and how the model evolves over time. Ask questions about how these tools are going to be deployed: Do you have an operations team or an implementation team that's going to help us implement these tools effectively? How are my end users going to get trained? There is a lot of hesitancy among clinicians, nurses, and pharmacists who are using these tools; this is a new technology that they may be using for the first time. They want to be educated on the tools and how to use them safely, so ask how the vendor is going to make sure that there is significant training for the providers. Ultimately, at this given moment, it is the responsibility of the health care organization to vet these tools, assess them, and do its due diligence.
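As one concrete example of the drift-monitoring questions above, the sketch below compares a model's recent output scores against a baseline window using the Population Stability Index, a common generic drift check. It is not any particular vendor's method, and the 0.25 threshold is a conventional rule of thumb rather than a regulatory standard.

```python
# Illustrative drift check a practice could ask a vendor about: compare a
# model's recent output scores against a baseline window using the
# Population Stability Index (PSI). Generic technique, placeholder data.
import numpy as np


def population_stability_index(baseline, recent, bins=10):
    """Return the PSI between two score samples; larger values mean more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    recent_counts, _ = np.histogram(recent, bins=edges)
    base_frac = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    recent_frac = np.clip(recent_counts / recent_counts.sum(), 1e-6, None)
    return float(np.sum((recent_frac - base_frac) * np.log(recent_frac / base_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2, 5, size=5_000)  # scores captured at deployment
    recent_scores = rng.beta(3, 4, size=5_000)    # scores from the latest month
    psi = population_stability_index(baseline_scores, recent_scores)
    print(f"PSI = {psi:.3f}")
    if psi > 0.25:  # conventional rule-of-thumb threshold for notable shift
        print("Flag for review: the score distribution has shifted notably.")
```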
That being said, the biggest takeaway I would say is definitely lean into this technology. While I'm sitting here talking about all of the potential risks and the regulatory environment, I'm so excited about the use of this technology in health care and health care delivery—for my colleagues, for my friends, for our customers at Flatiron. They are excited to use it, and so I would say definitely lean in. Do not be late to the game on using this. We're not going to know how safe, effective, and high-quality these are until we get them into the hands of clinicians. Also, elicit feedback from your clinicians. Find out how these tools are being used, if they're any good, and then bring that feedback back to your vendors. But definitely lean in.
One of my biggest concerns, as we start to roll these tools out and as these tools become more adopted in day-to-day routines for clinicians, is: Are we going to contribute even more to the ongoing digital divide in health care? We work very closely with some very small oncology practices, and I want to make sure that they have the same opportunities to utilize these AI technologies as our largest practices with built-in IT infrastructure and as the major health care systems across the country. It's so important that there's equitable distribution of these tools across these practices.
One thing that I know for certain is that these small practices need these emerging technologies, and AI to help their day-to-day operations, just as much as these massive health care systems do, if not more. I'm incredibly excited.
I would say, for all the health care systems and health care providers out there, definitely lean in. Do not be afraid of this technology. Start using it. Use it in your day-to-day life so you can understand it, build it into what you do. It's not something to be afraid of. It's not something that I think is going to take your job. It's something that's going to help you be a better physician, better health care system, and deliver higher quality care. That's why I'm so excited about it.