Formed in 2012, the Parkland Center for Clinical Innovation (PCCI) is a technology research and development organization affiliated with Parkland Health & Hospital System in Dallas. Healthcare Innovation recently interviewed PCCI President and CEO Steve Miff, Ph.D., about some of the highlights of PCCI’s 2026 annual report, which has a focus on predictive analytics and AI.
Healthcare Innovation: Steve, your annual report notes that PCCI has pioneered a novel framework to ensure trustworthy and sustainable AI development, and it now has 14 models in production, seven in testing, one in early exploration, and others under development. Could you start by talking about the development of that trustworthy AI framework?
Miff: It has evolved over time, but particularly as we started to deploy models, we noticed that it is something that’s required not only at the front end as you build models, but also after deployment as you continue to maintain and support them.
We’ve identified four key pillars that we believe are critical to providing the transparency required to create trust. One is prediction transparency. The second is performance transparency. The third is security transparency, and the fourth is compliance.
With prediction transparency, what we’ve noticed is that it’s great to be able to predict rising risk and the level of risk for individuals, but unless you can give the details behind what’s driving the risk, the information is not as useful as it could be in terms of giving users the comfort that what they’re seeing makes sense. We developed a technology called “Islet” that enables real-time visualization of the information behind a model. With a click of a button from the electronic medical record, you can pop up a window that gives you not only the current predictive score but the historical values. Then it dynamically brings forth the top five factors that are influencing the prediction the most at that point in time. And then it gives you all the actual data that’s feeding into that.
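The "top five factors" behavior Miff describes can be illustrated with a minimal Python sketch. The feature names and contribution scores below are hypothetical (SHAP-style per-feature attributions are assumed); this is not Islet's actual mechanism, just the ranking step:

```python
def top_factors(contributions, k=5):
    """Return the k factors with the largest absolute contribution
    to the current prediction, highest-impact first.

    `contributions` maps a feature name to a signed contribution score
    (positive pushes risk up, negative pushes it down).
    """
    return sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)[:k]


# Hypothetical contribution scores for one patient's risk prediction.
contrib = {
    "missed_visits": -0.55,
    "hba1c": 0.42,
    "blood_pressure": 0.30,
    "age": 0.10,
    "smoking": 0.05,
    "bmi": 0.02,
}
print(top_factors(contrib, 3))  # the three most influential factors
```

Ranking by absolute value matters here: a strongly protective factor is just as important for a clinician's sanity check as a strongly harmful one.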
HCI: That’s interesting because we often hear from health system leaders that offering that kind of transparency is key to getting clinician buy-in.
Miff: Another pillar is around compliance. It is so important to make sure that any models being deployed meet the rigor of the latest compliance requirements. We’ve been part of the Health AI Partnership, one of the founding members with Duke and Mayo and Berkeley. They have published some really good criteria and rubrics about elements that should go into both the compliance on the front end and the lifecycle management of AI. We have identified a rubric of 20 to 30 different elements that we put every single model through before it is deployed and evaluated as an internally generated service.
The third pillar is around security. Whatever happens with the data needs to happen in a secure environment, because you’re managing PHI and managing multiple data sources that need to come together. It is important to highlight that and constantly pay attention to it, and have all the rigor, the accreditations, and all those components in place.
The last one is around performance transparency. The more models we deployed, the more time we were spending monitoring them to make sure that they perform according to how they were designed and trained, and that they’re not starting to deviate. That becomes overwhelmingly time-consuming, and we were spending more time monitoring things than developing new things. So we built and are in the process of deploying an AI monitoring dashboard that automates a lot of these statistical functions for the models that are being deployed. We’re also doing that now for LLMs and ambient listening models. It is important to be able to create those guardrails of what’s expected from a statistical perspective, and then be alerted when the model starts to deviate from the parameters that you’ve identified.
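The statistical guardrail Miff describes can be sketched minimally: capture a baseline distribution of prediction scores at deployment, then flag an alert when a recent window of scores drifts too far from it. The threshold rule here is a deliberately simple assumption for illustration; PCCI's actual dashboard is presumably far richer:

```python
from statistics import mean, stdev

def drift_alert(baseline_scores, recent_scores, z_threshold=3.0):
    """Flag drift when the mean of recent scores deviates from the
    baseline mean by more than z_threshold baseline standard deviations.

    Returns True when the deviation exceeds the guardrail, i.e. when
    an alert should fire.
    """
    mu = mean(baseline_scores)
    sigma = stdev(baseline_scores)
    if sigma == 0:
        return False  # degenerate baseline; nothing meaningful to compare
    z = abs(mean(recent_scores) - mu) / sigma
    return z > z_threshold

# Baseline captured at deployment vs. a recent scoring window.
baseline = [0.4, 0.5, 0.6, 0.5, 0.4, 0.6]
print(drift_alert(baseline, [0.9, 0.9, 0.9]))  # recent scores have shifted sharply
print(drift_alert(baseline, [0.5, 0.5, 0.5]))  # recent scores match the baseline
```

The same pattern generalizes to any monitored statistic (calibration, input feature distributions, LLM output length), which is what makes it automatable across a portfolio of models.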
HCI: Can we walk through some of the AI innovations described in the annual report? But first I wanted to ask whether some of these innovations could be commercialized or exported beyond Parkland’s use?
Miff: Yes, we design them that way. We’re not ourselves a commercial entity, but we’re always looking to be able to replicate these in other environments. For example, our trauma mortality model, which is unique, is a little bit more niche because it applies to Level 1 trauma centers and predicts real-time mortality — we’re in the process of deploying that at Grady Health in Atlanta.
Another thing we’ve done with multiple entities and health systems, and even with payers, is the work with our Community Vulnerability Compass, which is really granular SDOH data, but done at the block group level. We reverse geocode and attribute to a patient record their block characteristics, so now we have it on 100% of the patients without the need to interview them. We just published a paper on this in JAMIA, and it showed that it has incredible recall rates, not only at the overall index level but when you look at specific indicators, such as whether somebody has food insecurity or housing instability. It’s amazing to be able to take block group information, attribute it to a record, and have it be so highly indicative of what that person says. We have 50-plus organizations that now use it.
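The attribution step Miff describes reduces to a lookup once an address has been geocoded to a Census block group. A minimal sketch, assuming a hypothetical table of SDOH indicators keyed by block-group ID (the real indicators come from PCCI's Community Vulnerability Compass, and the geocoding itself is done upstream):

```python
# Hypothetical SDOH indicator rates keyed by Census block-group ID.
BLOCK_GROUP_SDOH = {
    "480113001001": {"food_insecurity": 0.31, "housing_instability": 0.18},
    "480113001002": {"food_insecurity": 0.07, "housing_instability": 0.04},
}

def attribute_sdoh(patient_record, block_group_id):
    """Enrich a patient record with its block group's SDOH indicators,
    so every record carries neighborhood-level context without a
    patient interview. Unknown block groups get an empty indicator set."""
    enriched = dict(patient_record)  # copy; leave the original record untouched
    enriched["sdoh"] = BLOCK_GROUP_SDOH.get(block_group_id, {})
    return enriched

patient = attribute_sdoh({"mrn": "123"}, "480113001001")
print(patient["sdoh"]["food_insecurity"])
```

Because attribution is keyed off geography rather than self-report, coverage is complete by construction, which is exactly the "100% of patients" property Miff highlights.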
HCI: I read that it is being used by the United Way in its data capacity-building initiative in the Dallas area.
Miff: Yes, it’s been a six-year journey with them. What I just absolutely love about that is that it’s foundational in multiple layers. United Way has been using it for years to track the impact that their investments in the communities are having, and track that year over year.
United Way also wanted to bring the community organizations in and increase their data capacity. Instead of just saying we’re going to pay for you to have licenses to access this, they found 200 organizations and put them into cohorts that go through a six-month curriculum to learn how to apply it to their specific situation. It’s been amazing to see. That’s exciting, because it is teaching people how to use data.
HCI: Let me ask about a couple of other predictive tools that are used in the hospital setting. One is a workplace safety prediction tool. Does it screen patients for the potential of violent interactions?
Miff: That’s what it does. As you know, violence against frontline staff is an epidemic. It’s gotten significantly worse since the COVID pandemic, and it continues to be a huge challenge. Many organizations are focusing on trying to alleviate the problem. This tool pulls from multiple sources, including the Community Vulnerability Compass data. It even uses things such as smoking status, previous involvement with criminal justice, or previous violent events. It pulls all this complex information together and basically predicts the likelihood that an encounter will result in a violent event. You have to be very careful that you’re not profiling individuals. You’re literally identifying triggers. This is one of the most vulnerable times in our lives, when we’re in the hospital for our own health or a loved one, and you add all these other things that compound that anxiety. For example, smoking always shows up in the top 10 predictive factors. All hospital campuses are smoke-free. If you are a heavy smoker and not able to smoke, that adds to your stress and starts to create a higher risk.
HCI: You also have a pre-term birth prevention program.
Miff: The pre-term birth program involved building a predictive model looking at underlying factors to identify women who are likely to have a pre-term delivery. Initially that program provided both education to women via texting and alerts to their providers. Then a broader coalition came together to do more work in this space, and we are the analytical engine behind it. We’re using the CVC, which we model across these patients, to understand the non-medical barriers and drivers of health. We’re modeling with data from a local source called the DFW Hospital Foundation, where we have close to 100% of all pregnancies that occur across the two counties and the associated serious complications. So we’re able to geocode and model those to understand where the highest density of these serious septic complications occur, and what the makeup of those neighborhoods is. One of the interventions is iron distribution, giving pregnant women iron very early in the pregnancy. We are using this to identify locations where the iron distributions take place.
We also built a maternal health forecasting model. Previously we had built a diabetes surveillance system, and we’re modeling it after that. The diabetes model predicts deterioration that will require ED visits and hospitalization 12 months out. It’s at the neighborhood level, and it gives you both the medical issues that are driving that prediction and the non-medical drivers, and it ranks them, and it’s very dynamic.
HCI: Another one featured in the annual report is a digital imaging surveillance system that leverages generative AI to identify missed diagnoses for follow-up care. We have written about health systems that are trying to do a better job of following up on incidental imaging findings. Is this similar to those efforts?
Miff: There are hidden things in the notes from the radiology report, such as incidental findings, and Parkland has been doing this manually for a number of years. We now use LLMs to scan through all those notes and identify these incidental findings. I think it’s amazing how robust the accuracy is; it’s actually more accurate than humans doing this.