Episode 09: Transforming Physician Culture

Why can it be so difficult to change behavior in physicians? How can “unremarkable AI” be seamlessly integrated to improve clinical care? Join host Karen Wolk Feinstein and physician Dr. Seth Wolk, adjunct professor in the Department of Health Management and Policy in the School of Public Health at the University of Michigan, and technology design researcher John Zimmerman, Tang Family Professor of Artificial Intelligence and Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University, as they explore the human context for medical error – and how we can get physicians on board with innovative technologies that could make the healthcare system safer.

Listen to this episode on: Apple Podcasts | Spotify


Episode Transcript

[00:00:00] Seth Wolk: Generally physicians as a group are highly, highly skeptical…

[00:00:10] John Zimmerman: Often it’s not their fault. It’s the way that the technology has been presented to them…

[00:00:18] Seth Wolk: There is an overarching culture of medicine, but it’s not monolithic…

[00:00:25] John Zimmerman: I think we’re going to see a shift in who goes to medical school and what medical practice looks like…

[00:00:34] Karen Wolk Feinstein: Welcome back to Up Next for Patient Safety, where we envision a world where medical errors, adverse events, and preventable harms are avoided, and where we examine the most promising paths to prevent these tragedies before they occur. I’m your host, Karen Feinstein, CEO and president of the Jewish Healthcare Foundation and the Pittsburgh Regional Health Initiative, which is a multi-stakeholder quality collaborative.

We’ve been working to reduce medical error for over 20 years, mostly unsuccessfully, but we can’t give up because there’s too much at stake. And that is the loss of approximately 250,000 lives a year and long-term injuries for many more.

Today we have two guests from different backgrounds. One is a surgeon and former healthcare executive, and the other is a Carnegie Mellon professor focused on human-computer interaction. We brought them together to discuss how, or even whether it’s possible, to bring about physician behavior change. I think often of the American Board of Internal Medicine Foundation and its Choosing Wisely campaign – I love this campaign.

The ABIM asked various medical specialties to reach consensus on best practices – things you should do – and worst practices – things you shouldn’t. Even so, with all these consensus guidelines, some physicians ignore them or push back against them. So it’s often discouraging when it comes to introducing new behaviors or technologies, even when they can make healthcare safer and practice easier. So we’re going to talk about this.

So let me introduce, first of all, Dr. Seth Wolk is an adjunct professor in the Department of Health Management and Policy in the School of Public Health, University of Michigan. Formerly a board-certified vascular surgeon, he earned his medical degree from Harvard Medical School and completed his general surgery residency at Mass General Hospital in Boston. Following this, he pursued a fellowship in peripheral vascular surgery at the Mayo Clinic in Rochester, Minnesota, some pretty good credentials. Dr. Wolk also has a graduate degree in Healthcare Services Administration from the University of Michigan School of Public Health. He served as a physician fellow at the VA National Center for Patient Safety. He is the retired system chief medical officer at Spectrum Health, an $8 billion not-for-profit integrated health system based in Western Michigan. In that role, he advanced physician participation in delivering high quality and more efficient care.

Now we’ll go to John Zimmerman. He is the Tang Family Professor of Artificial Intelligence and Human-Computer Interaction in the School of Computer Science at Carnegie Mellon University – I should say the esteemed School of Computer Science. He conducts research on human-AI interaction and human-robot interaction. He’s particularly interested in how professionals engage with AI to make critical decisions and to improve their work practice. He teaches courses on service design, user-experience design, and the design of AI products and services. His research has been funded by the National Science Foundation, National Institutes of Health, Department of Transportation, and Department of Education, as well as by Accenture, Bloomberg, Google, and IBM. He’s also currently a part of the NSF AI Institute of Caring.

Welcome, both Seth and John. I want to begin with some questions. Seth, you’ve worked with physicians throughout your entire career, first as a surgeon yourself and then in an executive position at Spectrum. Could you start us off by telling us why it can be so difficult to bring about behavior change in physicians? I think of my own feeble efforts to get physicians to always wash their hands or observe sanitary precautions. I have to admit there was some frustration there. Is there a reason why physicians resist guidelines and directives?

So here’s a string of questions also, does physician acceptance of guidelines or assistive technologies or any update to their practice differ by specialty? How do you explain the outsized acceptance of safer practices by anesthesiologists? How can we get more specialties to own their safety problems and work aggressively on solutions?

[00:05:05] Seth Wolk: Well, thank you for inviting us both. Both John and I welcome the opportunity to discuss these issues with you. Let me start by saying that generally, physicians as a group are highly, highly skeptical. They’re not born that way, but I think that occurs during their training, especially in medical school, when they learn what’s called differential diagnosis, which is essentially an extreme form of deductive reasoning. A physician starts out with a wide array of potential diagnoses that could explain a patient’s presenting signs or symptoms, and very quickly they narrow this down to one or two potential diagnoses. But they also bring this form of reasoning into many other walks of life, and basically they are quite skeptical. So whenever anything is being introduced – a new initiative, for example – usually they are thinking they are going to present the 50 or 99 reasons why this won’t work.

Additionally, like most other highly trained professionals, they have a very strong sense of autonomy and a strong bias for the status quo. Therefore, changing their beliefs and behaviors is hard, but it is possible. And fundamentally – what I would advise is – you have to commit to deeply understanding them and at least be willing, initially, to meet them where they are. There is an overarching culture of medicine, but it’s not monolithic. In reality, each tribe or specialty has its own strong and unique subculture, with its own associated beliefs, lore, language, et cetera. You know, specialty residency training would be called a cult by many in our society. But one very pertinent, relevant example of these unique subcultures is where they fall on what I term the unique craftsman versus equivalent actor spectrum.

Karen, I’d ask you: the last time you boarded a commercial flight, when you stepped through the front door, did you ask for the name and qualifications of the pilot and copilot? I suspect the answer is no. You generally assume that all pilots and crew have equivalent, excellent training, and you are comfortable with that – so comfortable that you were literally willing to trust your life to them while sitting in an aluminum tube five miles above the earth going 500 miles an hour. So in medicine, in these different subcultures, let’s talk about equivalent actors and the specialties that have adopted this mindset. Those are specialties such as anesthesiology, radiology, pathology, and laboratory medicine, where by and large practitioners consider themselves equivalent to one another, and referring physicians and patients do as well. Not coincidentally, these specialties are also the ones that have made the greatest progress over the last three decades in delivering their care in a safer fashion.

Unfortunately, at the other end of the spectrum, we have surgery. Surgeons believe that they need to develop a unique brand, that they are unique craftsmen, that what they do is somehow different and hopefully better than their colleagues, and that this is how they’ll build their name and reputation. Unfortunately, one of the side effects has been relatively slow progress in adopting many aspects of modern patient safety and systems thinking – specifically, understanding how systems are designed, implemented, and maintained, and why they may fail. That’s the best answer I can give you to your first set of questions.

[00:08:47] Karen Wolk Feinstein: And I do fall asleep comfortably on airplanes before they even take off. It’s interesting you bring up culture… I was on the Medical Ethics committee here at the University of Pittsburgh Medical Center when Tom Starzl wanted to switch from cyclosporine to FK506. And you know, we got very upset, saying, “You weren’t following guidelines; we need a long randomized clinical trial.” And he said, “No, I found cyclosporine and now I found something better, and I am going to use it.” And it’s difficult, because he was right, you know? So it’s a challenge, and I hear what you’re saying, there is a culture.

John, your work in healthcare focuses on decision-making in clinical settings and, in particular, the physician-computer interface. Can you talk a little more about your research and give us some examples? Have you encountered resistance to technology even when it could make the physician’s work easier, safer, and more reliable?

[00:09:44] John Zimmerman: Ah, thank you, happy to be here and share what I know. My work in healthcare lately has looked at decisions by cardiologists to implant mechanical hearts. So it’s a life and death decision and I’m also doing some work right now, looking at the ICU and adherence to the protocols around cutting sedation and running breathing tests so that they can extubate patients from ventilators. It’s really exciting to sort of work in these spaces and think about what computing can do to help people. I would say I’ve met all kinds of people that are super resistant to help, particularly people who have a lot of expertise. And I think often it’s not their fault, it’s the way that the technology has been presented to them.

So a number of my colleagues more on the computer side make a certain set of assumptions. One is that doctors know they need help, that they have free time, that they’re going to walk up and use a computational system to get an answer to their question, and then that they’re going to be super thankful. And I do sometimes wonder, have they ever actually met a doctor? Because just spending a little bit of time in clinical practice, you see people are very busy. A lot of decisions are team-based, they’re happening nowhere near computers. And I think my community has not spent enough time really understanding what the experience of practice is like, the pressures that people are under. And to understand where is it that they have uncertainty, where computational insight might actually be valuable.

And in general, like my whole field of human-computer interaction, we pretty much never blame the users. So if we have users that are resistant, that means we’re making the wrong technology and we need to rethink how it works.

[00:11:43] Karen Wolk Feinstein: You sometimes use a term – your work has focused on the theme of making, quote, “unremarkable AI.” Can you explain that principle and what it means for healthcare?

[00:11:57] John Zimmerman: Sure. Typically today, when AI systems are designed in healthcare, they tend to want a lot of attention, and they sort of want credit for what’s happening. We’re trying to shift that mindset a bit and borrow from some work by Mark Weiser and Peter Tolmie, who talk about unremarkability in the way we might talk about electricity. In general, none of us think about electricity. It’s truly unremarkable until we don’t have it, and then we realize the tremendous dependency we have on it. But electricity, while it’s involved in everything I do, never gets any of the credit, nor does it really want any. And so the thinking is, if we’re working in human systems, we need to design them to be respectful of the people.

So with the work we were doing on VAD (ventricular assist device) implants, not unexpectedly, physicians were skeptical that the predictions we were making were going to be valuable. And they didn’t have any free time to do the work. So what we did is just look at their work practice. We noticed that they had weekly meetings where they discussed which patients were or were not going to be surgical candidates for an implant, and we designed a system that automated the creation of the slides used in their meeting. Because we did that, we could sort of own the visual real estate. We put a survival curve in the upper right-hand corner of the slide, sort of subtly there, and mostly it’s just agreeing with what the clinicians want to do.

So it’s really not in their way. They can keep working at their normal pace. Occasionally, it’s going to disagree with them and what it’s doing there isn’t really telling them they’re wrong. It’s just making a prediction and it’s creating a little bit of friction, maybe for a case where they might slow down, maybe this case is a little less textbook than it appears on the surface. And so that’s really what we’re trying to do is mostly be out of the way, but then step in at just the right moments to offer suggestions, not to try to make decisions for people.

[00:14:08] Karen Wolk Feinstein: Thank you. That’s a very interesting observation. So Seth, I’ve noticed over time that physicians often resist efforts to bring other disciplines into their world to make medical care safer.

So I know we tried to introduce some Six Sigma black belts into units to work on lean quality engineering, and it was not well received. Some have suggested that this overall reluctance to engage, say, human factors experts, quality improvement engineers, and other safety scientists in building safer hospitals, safer practices, safer nursing facilities, has held healthcare back. Other industries definitely work on safety by being deliberately interdisciplinary. Any idea how we could create a more welcoming environment for introducing other disciplines into medical safety?

[00:15:04] Seth Wolk: I would certainly agree that many physicians, but certainly not all, continue to resist expertise arising from non-clinicians. They believe that other industries operating in complex, high-risk environments, such as nuclear power, commercial aviation, et cetera, that have made great strides in reliability are not analogous to the practice of medicine and the delivery of healthcare. Certainly some exceptions can be pointed out, such as an airliner’s ability to shut the door when capacity is met or exceeded. But fundamentally, the issue here is the acceptance and embrace of systems thinking.

During their medical education and residency training, most physicians develop a belief system that they alone are responsible for an individual patient’s outcome, just as a pilot has his or her hand on the control column when an aircraft touches down. But the conclusion of a successful flight is also dependent on a highly complex system including ticketing, baggage loading, weather forecasting, flight routing, control systems – you can go back even to the financing of aircraft; I mean, it’s a very complex system. In other words, the pilot’s skill is necessary, but not sufficient, for a successful outcome.

The four years of medical school and, subsequently, residency specialty training is not an efficient experience. The current system was designed to teach students how they, as individual, independent actors, should care for patients, because at the time the architecture of medical education was defined, over a century ago, doctors largely worked independently of each other. Today, doctors work within very complex systems. There are processes through which multiple caregivers interact, and these processes are embedded within value networks that define how the various actors interact. It varies by hospital, medical school, and specialty department, but in general, there’s little in the medical training of physicians that teaches them how to create, administer, lead, and improve the way people work together in today’s healthcare systems.

[00:17:23] Karen Wolk Feinstein: It’s interesting – years ago, we were running a registry for cardiac surgery, and it surprised us that the same surgeon would get different outcomes in the different hospitals where they operated with different surgical teams. So in some ways the data suggests that, you know, it’s not a one-person show. That probably didn’t do a lot to change the culture, though.

John, what can AI do that goes beyond telling a doctor what they should do, beyond the alerts that people are starting to push back on? No, not starting – they’re pushing back on the regular alerts, as you’re probably aware. What are the many ways that AI-enabled safety could serve physicians?

[00:18:10] John Zimmerman: Well, I think there are a variety of ways that are largely unexplored because of the way the data is encoded in electronic health records. It’s very easy to make predictions about clinical decisions, because that’s what there’s great ground truth for, but there are still some other opportunities. So in the work we’re doing in the ICU, we can see from the data that lots of patients aren’t given the wake-up-and-breathe protocol. So when the doctor arrives ready to make the decision – should this patient be extubated? – that patient hasn’t been prepared enough for them to make the decision. So sort of the window closes.

And we think of this as expert disagreement, because either the respiratory therapist didn’t think this patient should have the test or the nurse didn’t think this patient should have the test. So could we actually predict when we’re likely to get expert disagreement and surface it so that the clinicians can come to an agreement, so that we’re not missing those delivery windows?

We’re paying more attention to the sort of temporal aspects of care, but also understanding that there are group decision points, and you can build structures that better support that. Other things we’re working on, to help with coordination between the respiratory therapists and the nurses doing this cutting of sedation and the breathing tests, are to generate prioritized sets of patients that are eligible – to give more predictability of when someone is likely to be somewhere, so everyone can sequence their work a little more effectively, and there’s less waiting for someone else to take the action that you need before you can do the action that follows.

[00:19:59] Karen Wolk Feinstein: That’s so practical. I mean, I can’t imagine why anyone would resist that, but also, I mean, as you’ve said, there’ve been some really interesting examples of how AI can help with prediction, risk stratification, forecasting, customization, you know, not telling someone what to do, but giving them a sense of what is likely to happen to the patient and what is likely to happen with different interventions.

[00:20:25] John Zimmerman: One thing I would throw out as an example is in the space of persuasive technology. I’m sure you’ve been driving and you’ve seen a sign that shows the speed limit, and then it shows you your speed, right? It’s not giving you a ticket, but it’s clearly sensing how fast you’re going. It’s like a reminder of what the standard of care is at the moment and that you seem to be deviating. So it’s showing the awareness, but it’s not trying to tell you what to do. But it’s very persuasive and behavior-changing – much more than if you’re giving people tickets, because then it becomes about avoiding the spot where you’re being monitored. And I think the use of more persuasive technology as opposed to confrontation – I think of alerts as very confrontational – makes the work function much more effectively.

[00:21:17] Karen Wolk Feinstein: Just in that sense, for both of you: we’ve seen that physicians often tend to push back on practice standardization, but those of us in the safety world think of it as a key to safety, predictability, effective teamwork, et cetera. How do we put this together? How do we advance healthy standardization without killing healthy innovation?

[00:21:43] Seth Wolk: John, please go ahead. I know I’ll be that much smarter after you’re finished.

[00:21:50] John Zimmerman: So I think standards of care are always reductive. They can’t capture the nuance of every situation, and that becomes the inflection point, because a standard is never going to be perfect. It’s never going to have a full, rich understanding of the situation, and the intuition of the experts feels disrespected by this cookbook approach to medicine – you’re sort of removing the chef’s expertise. So again, I think this is where persuasion is more helpful than enforced adherence. They may have an excellent reason for deviating that the system has no way of sensing. But by encouraging them to document the rationale, you can begin to capture that and then turn it into a discussion of context, to say, “Within this ICU, we want to discuss what our standard of care is, based on the documentation of where people are deviating. Do we actually think the standard should be updated?”

So one great example we saw in a children’s ER: they get a lot of concussions on Sundays. These are Pittsburgh kids who come from out of town to play hockey; they get a concussion, and there’s a protocol they should follow… but a lot of it could be pushed off to the child’s pediatrician, who can pick it up on a weekday, as opposed to the family waiting in emergency services on a Sunday night, which is super inconvenient. So not surprisingly, doctors are deviating from the care pathway as it’s documented, but in a perfectly reasonable way. Capturing those deviations and surfacing them so people can have those conversations – and can see, well, Sunday evenings are different than other times – humanizes how standards become operationalized within organizations.

[00:23:48] Seth Wolk: I would say again that many, perhaps most, physicians react negatively to the term standardization and generally push back hard. Initially, I thought this was because they mistakenly equate standardization with averageness. However, over the years I’ve come up with, frankly, a different explanation, and it goes as follows. Although the first two years of medical school are generally the same for all students, as they take the same basic science courses in the same order, years three and four, consisting of clinical rotations, introduce extreme variability. The sheer number of scheduling combinations that medical students experience virtually guarantees that all students have a different combination of experiences and skills by the time they graduate. This extreme variability makes it difficult to teach students effectively, even more difficult to verify their skills before they enter residency programs, and almost impossible to figure out which parts of the education process need to be fixed.

Additionally, in my opinion, medical schools intentionally seek, even prize, variability in what students learn. Their current methods identify the best students so that they can be tracked into the best residency programs. This approach has actually perpetuated variability in the training of our physicians. Residency training programs continue this fixed-time, variable-learning model. This is obviously the opposite of modern manufacturing processes, with their emphasis on standardization. Therefore, given how physicians are trained, we shouldn’t be surprised by physicians’ unfamiliarity with the principles of standardization.

Balancing innovation and standardization is essential to moving healthcare delivery forward. A danger of standardization, as John mentioned, arises when it becomes an all-consuming end in itself; a constant push to standardize can inadvertently stifle creativity and innovation. Freedom to innovate, however, cannot be freedom to do whatever we feel like doing. Standardization provides a foundation on which innovation can build. Think of standardization as a core set of tools and practices one might apply to all patients. Innovation can take the form of tools and practices that go above and beyond the standard. This will enable every clinical team to extend the core set of standardized tools and processes to meet the individual needs of the team’s own specific patients.

[00:26:25] Karen Wolk Feinstein: Just suppose, John and Seth, that we have algorithms that can get really good, in large datasets, at detecting outlier behavior and also at determining whether it results in better or worse outcomes. Just suppose… could the algorithm ever create a customized education, coaching, and training program for physicians whose outlier behavior is producing poor outcomes?

Is there something that the various specialty societies, Seth – licensing boards, medical associations – could be working on with people like John and the machine learning center to customize education, so that everyone doesn’t get the same dose of the same intervention, the same medicine… but instead people get, in real time, an updated learning experience in something that is indicated as needed?

[00:27:32] John Zimmerman: Let me give you two very different answers to that. So I’m super excited about your vision, but I think there are sort of two challenges. In my own area – you know, I’m a college professor, college professors teach – we have 25 years of awesome research showing that lecture is a very ineffective method of teaching, particularly in STEM, but the number one way that STEM is taught in universities is through lecture. And largely that has to do with how professors are recognized for excellence and rewarded, which is not for deviating from lecture in their teaching. So as long as the reward functions aren’t aligned with what you want, you won’t get it. If a goal is really to get people to adhere to standards of care, then they should be rewarded for doing that.

And that is not how excellence in medicine is currently rewarded. Additionally, if you’re thinking about this as personalized learning – again, I’ll use my own area, computer science. A number of years ago, we were asked, can you produce students that are better leaders, better presenters, but still amazing computer scientists? And my colleagues – who are socially dysfunctional, like introverted, narrow but amazing abstract thinkers – were like, sure, we could train them to do that. And it’s like, this is not an education problem, this is an admissions problem. We need to actually attract a different kind of student, one that arrives with very high social and emotional intelligence but also with that ability to do the abstraction. If that is what you want, it’s actually a different starting place. And again, in medicine, I suspect that as the diagnostic aspect of the work is continually overtaken by computers – in the same way it basically has been in investment banking, right? Computers are much better at paying attention to a billion numbers simultaneously than a human – the work shifts to: how are you at interacting with patients?

And I suspect that in the future, the ability to be an effective counselor and communicator – one who can describe what a patient is going to go through and be the coach that helps them become a more active participant in their own health – will become increasingly important relative to the ability to be excellent at the diagnostic aspect of the work. It’s all hard, but the tools are sort of taking over the mathy part; the human part they’re never going to overcome. And so I think we’re going to see a shift in who goes to medical school and what medical practice looks like.

[00:30:28] Karen Wolk Feinstein: Oh John, you teed up two big issues for Seth. One, admissions: are we looking for the right things in our physicians of the future – particularly, as John suggested, that social, emotional connection to patients? And secondly, are the specialty societies gearing up for more customized education, instead of – let’s take the worst case – everyone going to a fancy resort or taking a cruise, getting the same lectures, and going home? That doesn’t seem, in the modern era, the best way to keep people at the peak of their practice. So we’ve teed up two big ones for you.

[00:31:11] Seth Wolk: Certainly the question of admissions is an interesting one. By and large, extremely talented people are still admitted to U.S. medical schools. The question is what the subsequent curriculum looks like and whether it ultimately addresses what John was suggesting. The curricula provide a tremendous amount of information, but is it the information students need, or is it other types of skills – some of which John mentioned – things such as, you know, curiosity, compassion, change management? We can go down a long list… how well are students actually being instructed?

And I suspect we could probably all agree that, in the present state, there’s a lot of room for improvement. You brought up the issue of specialty societies, and in my opinion, specialty societies in today’s world typically serve two functions. One is to, hopefully, improve the clinical care of patients in that specialty’s domain. The second is to enhance or protect the financial interests of the physicians who practice in that specialty. Neither one of those is bad; both are good. The problem is that they can often come into conflict, and in general, I think societies struggle to recognize those conflicts and deal with them effectively when they occur. I’d certainly like to see the societies do a much better job of addressing the concerns we’ve been talking about. However, at this point, relatively few, I think, are actively doing so.

[00:32:55] Karen Wolk Feinstein: Well, here’s an opportunity for both of you to come together. But yes, I do think our maintenance of certification and our continuing education efforts may be ready for an update. So now I’m going to ask a really tricky question – let’s see who wants to answer this – but do you notice a generational gap or a gender gap among physicians? Meaning, are younger or women physicians more likely to accept change, to accept new technologies, even to adhere to standards and safety guidelines? Who wants to take that one?

[00:33:33] Seth Wolk: Well, I’ll go first – John was gracious enough before to lead off. Karen, I suspect these questions have been and continue to be studied; however, I think I have some personal experience to add to the conversation. I’ve certainly heard many state that physicians are technology-phobic, especially those in my generation. However, I respectfully disagree. Physicians quickly adopt technologies that are what I call task-specific, such as mobile phones, phones with sonography apps, surgical stapling devices, endovascular devices, et cetera. However, most push back on technologies where the benefits mainly accrue at the organization or population level.

And this is a critical area for organizational leaders: they have to play a more effective role in explaining this and in change management. It’s difficult to change the behavior and beliefs of professional colleagues who, as mentioned, have a strong bias for the status quo. But leaders, especially those at the local level, at the practice site – who I call the core leaders of an organization – must be able to clearly articulate how the various forces changing healthcare delivery, such as digitization, precision medicine, consumerism, and obviously artificial intelligence, can enable clinicians to take better care of the patient in front of them, take better care of their colleagues, and take better care of themselves. I think that’s possible.

[00:35:10] John Zimmerman: I love that answer. I’m going to add to it, hopefully in a very complementary way. I would definitely say there is a generational difference, and that we want to keep it. Often the younger practitioners are very excited about new systems coming online, but lack the institutional knowledge or the depth of practice to recognize the actual problems or unintended dependencies those systems will produce. And it’s in the friction of the dialogue between the two that it’s easier to tease out which new technologies are simply not helpful, and maybe even have negative impact, and which ones really are truly valuable.

And for the things that are obviously valuable, generally you don’t get resistance from anybody. But a willingness to try a more experimental approach – like, “well, let’s just use this and see” – we definitely see much more of with younger doctors, younger professors, youth in general. And I think the youth become an opportunity to prototype, because through their willingness to waste more of their time trying to learn it, they actually develop the best practices for getting the most benefit from that technology. And that’s the point at which it suddenly becomes valuable to the older practitioner. So I actually think that tension leads to better integration of technology in professional organizations.

[00:36:51] Karen Wolk Feinstein: Well, I thank you both for really thoughtful responses to that. Earlier today, actually, I looked at some research that suggested that for difficult diagnoses, you really want an experienced clinician rather than a younger one. So I love the answer that there’s a role for both, and a wonderful meeting of the minds when there is that interaction that allows different perspectives to be heard and brought forth.

So it has occurred to me that, in some ways, technology did have a role in reducing some of the pandemic’s effects. We know that for physicians, the impact of this pandemic has not been good: the sense of burnout, the frustration, just the stress every day. You know, my son-in-law’s a surgeon, and this week his surgical nurse couldn’t make it – she was diagnosed with COVID – and that means, of course, the surgical team has to respond to that. But I think of the things technology might have done if it were there to enable our response to the pandemic, whether that’s having equipment, supplies, and expertise in the right place; doing the prediction and risk analysis of who’s going to deteriorate and need various interventions; or just overall surveillance and forecasting. We were talking this morning – I was in a different conversation about how valuable our sewage is for predicting where the pandemic is going in a certain geographic area – but there’s so much technology can add that might have lessened the burden of this pandemic. So, you know, there are good things to be aware of, and there’s a respect for the culture, as Seth said, that has upsides and downsides.

So thank you so much to both of you for exploring an incredibly interesting, sometimes complex issue, but one that is critical, I think, to those of us who are focused on safety. Where technology has a role to play, let’s hope that we don’t resist it. Where other disciplines have a contribution to make, I would like to hope that we can welcome them. And let’s hope that what the physician uniquely brings – age, gender, training, their approach to their practice – is also respected. So thank you so much. Thank you, Seth. Thank you, John. And I know and hope you two will continue your own dialogue.

[00:39:35] Seth Wolk: Thank you.

[00:39:36] John Zimmerman: Yes, thank you, it was wonderful to participate.

[00:39:40] Karen Wolk Feinstein: To learn more about the effort to establish a national patient safety board, please visit npsb.org. We welcome your comments and suggestions. If you found today’s conversation enlightening or helpful, please share today’s podcast or any of our other podcasts with your friends and colleagues.

We can’t improve the effectiveness of our health system without your help. You, our listeners, friends, and supporters, are an essential part of the solution. A transcript and the show notes, with references to related articles and resources, can be found on our website at npsb.org/podcast/. Up Next for Patient Safety is a production of the National Patient Safety Board Advocacy Coalition in partnership with the Pittsburgh Regional Health Initiative and the Jewish Healthcare Foundation.

It is executive produced and hosted by me, Karen Wolk Feinstein. Megan Butler and Scotland Huber are my associate producers. This episode was edited and engineered by Jonathan Kersting and the Pittsburgh Technology Council, thank you, Tech Council! Our theme music is from shutterstock.com. Social media and design are by Lisa George and Scotland Huber. Special thanks to Robert Ferguson and Steven Guo. Thank you all for listening.

Subscribe on your favorite podcast app: Apple Podcasts | Google Podcasts | Spotify | Pocket Casts