This transcript was made with an automated transcription tool, with some manual editing by the Papers Podcast team. Read more under “Acknowledgment”.
Jason Frank, Lara Varpio, Linda Snell, Jonathan Sherbino.
Start
[music]
Jason: Welcome back to the Papers Podcast, where the number needed to listen is one. The gang is all here. We’ve got literature for you. We have insights, and we have Lara’s dental pain. So you can’t go wrong with an episode like this. So Jon’s here.
Jonathan: Hi Jason. Hi Lara and your nitrous oxide.
Jason: Lara may be a little bit more giggly than usual. I thought it was just my jokes, but it’s apparently the nitrous oxide.
Lara: oh no.
Jason: Lara, how is your tooth?
Lara: I had a root canal and honestly, I think my dentist was really excellent but I’m just so not emotionally prepared for this reality.
Jason: Remember what I said, I do discount dentistry in case you
[Laughs]
Lara: Discount dental, that’s what I don’t need.
[Multiple voices talking at once]
want an alternative way of getting things done.
Linda: Her face is already crooked and swollen up on one side. You’re gonna have her on video and it’s swollen.
Jason: Yikes.
[Laughs]
Lara: It is swollen though and if we’re on video, it’s not a good day.
Jason: Awesome. Okay, so, but Linda’s got a paper. So, Linda?
Linda: This is a paper about mistakes. Now, when I first sent this paper around, Jon said, we don’t need to do another paper on clinical reasoning, but that’s not what I think it’s about here. I think it’s about mistakes and whether we learn from them, how we learn from them, and whether we learn from them in practice. I’ll tell you, the authors are Kotwal, Howell, Zwaan, and Wright.
The latter two names I know. Scott trained with us and has become an outstanding clinician educator. Laura Zwaan, I think, is a guru of clinical reasoning, if I understand correctly.
Jason: Scott Wright is a man with many clever papers. You got to give him credit.
Linda: sure.
Jason: He’s the guy that did… He got a New England Journal paper, I think, as a chief resident or something.
Linda: Yeah, that was about our context. So the idea here is that we talk about mistakes and how we learn from them. So I’m going to ask you all in a minute to describe for me, or reflect on, a mistake that you might have made, clinically preferably or otherwise, and a success that you’ve had, and what the takeaway was. But just as we’re thinking about mistakes or errors, we know that in medicine, errors are distressingly common. And many of these errors, though not all, are diagnostic errors. But if we look at errors as opportunities to learn and improve, and not to assign blame, we should be increasing the competence of people because we’re adopting a growth mindset. And clinicians should continuously grow with feedback, so they get feedback on their clinical experience, errors or successes.
Now, before we get into some of the details of reasoning why this paper, let me ask you, I’ll go Jon, Lara, Jason, to reflect on a mistake or a success and the takeaway.
Jonathan: There’s a mistake that I made as a trauma attending very early in my career. I delegated a diagnostic act to a member of our team. If you know anything about trauma medicine, it really is something that’s done as a collaboration. You have nurses, you have respiratory therapists, you have clerical workers, you have porters, and then radiation technologists.
Then you have a surgeon, and then you have an anaesthetist, et cetera, et cetera.
Jason: You mean an x-ray tech? I don’t think you mean a radiation tech, because that’s a different kind of trauma.
Jonathan: Okay, an x-ray tech. But, you know, and the CT tech, I guess. You need a bunch of people. And I delegated an act, and that person worked within their bounds and made the mistake. And I never had a system that checked it. And ultimately a young person died, and I didn’t sleep for weeks.
And what I learned from that is how people can come around and support you. And I certainly have a slightly different trauma practice now, but man, I’d love to not have made that mistake. I’m pretty sure I could have learned that lesson a lot easier. And there’s nothing victorious in my answer, except to say medicine’s really, really, really hard. And if you go through your entire practice imagining you’re not gonna make a mistake, you just haven’t paid attention enough. There’s a bunch trailing behind you.
Lara: The mistakes that I’ve made over the course of my career, I’ve talked about publicly many times. So instead of talking about those, what I’d like to do, because I’m a professor, I don’t have a clinical practice, I’m not a clinician, so professionally the mistakes that I make are just different in scope and scale. But what I would want to point out is that the list of my errors or failures is long and illustrious.
There’s a scholar, I think at Princeton, who started a CV, a curriculum vitae, a living document of all of the things he tried and wasn’t successful at. And that has inspired me. So I started putting my own together a few years ago. It is full of grants I applied for that I didn’t get. Papers that I thought were really fantastic and not a single reviewer agreed. It’s full of people, you know…things I tried to do in an effort to make things better, and it didn’t work at all. And one of the things that has helped me to think about learning from failure is to recognize that the list of my failures is always longer than the list of my successes. But the lessons that I’ve learned are invaluable. Like, the only way I’ve learned how to write a really good introduction to a grant is because I have failed so miserably before, so often, that I now have a bit of a better sense of what makes a compelling argument in a grant introduction.
Jason: What am I going to say about this? I actually have a really big problem with the premise of this whole topic, this idea of mistakes, because it’s just such a nonspecific word. It’s poorly defined, and even medical error, I know there are several operationalized definitions of medical error. I think sometimes we make a diagnosis and someone else makes a different diagnosis. Sometimes they make a diagnosis and, had we made a different one, the patient might have done better. I think that’s true. So in my career, there are lots of times, because we had metrics back, you know, where other people have different diagnoses, or people bounce back, get admitted. There are endless times where people have made different diagnoses, and I reflect on them all. Sometimes it’s because new information becomes available.
As an emergency physician like Jon, we are always making the best possible plans on incomplete information. So that’s a really prevalent thing. Thank goodness, what’s tapering in my life so far, knock on wood, is there are fewer and fewer things where I just had no idea about that topic. Like, I had not seen that key feature before. I definitely have a mental list of the scary versions of those. You know, I’ve had patients come in with some horrible diagnosis that I had never seen and didn’t recognize the key features for. It may or may not have made a difference, but they do haunt me. So I do think about them a lot, in the intellectual sense. So this topic bothers me because there’s so much pejorative language around it. It’s a barrier. It gets in the way of us all doing better. And I think we need to be more precise. Sometimes we make a diagnosis that other people disagree with, blah, blah, blah, blah. Did it harm the patient? Did we learn from it? All those kinds of things. And I know that this is what the paper’s about, so I’m setting you up.
Linda: So thank you all. I’ll just add very briefly, I think I certainly have made mistakes, but my approach to having made a mistake has changed over time. I remember how defensive I was when somebody pointed out that I’d missed a low serum potassium that probably contributed to somebody’s arrhythmia. When I was a resident, it couldn’t be me, and did I ever deny, deny. And now I want to know about it. I want to do something about it. I want to learn from it. So there probably is an evolution. Their title is Exploring Clinical Lessons Learned by Experienced Hospitalists from Diagnostic Errors and Successes, and it’s in a clinical journal. It’s in the Journal of General Internal Medicine in January 2024.
Jason: It’s a great journal.
Linda: Excellent journal, and we’ll maybe come back to that a little bit. These authors point out that there is such a thing as diagnostic excellence. And part of it is we need to learn from our errors and our successes. And they introduced me to a concept or an approach I hadn’t heard of, called the safety one and safety two approach. Safety one being, let’s identify the cause of the error and try to fix it and prevent harm. Safety two being, let’s focus on the successes and try to repeat them as best practices for safety. The other point to make here is that in the US, many, many hospitalized patients are cared for by hospitalists, who play a key role not only in the care but in the diagnosis of patients. And hospitalists are often busy promoting patient safety as well.
So what we don’t know is the lessons that have been learned by experienced hospitalists from their errors and successes. So these authors say, since it’s not being described how hospitalists learn from their clinical experiences over time, specifically the clinical insights attained from errors and successes in patient care, we conducted this study to identify and characterize clinical lessons learned by seasoned hospitalists from diagnostic errors and successes. So, very briefly, I’ll go Jason, Lara, Jon, your thoughts on the premise, something that you’d like to follow up on?
Jason: So, I’m going to be grouchy on this one. So, I’m going to reiterate: I have trouble with the language used. It’s pejorative. I’m not sure about the logic of what they’re saying. They’re going to research mistakes, it’s in the title. And they talk about diagnostic error, and they’re asking people how they learn from it. But we’re gonna get into some of their findings, and I’m not sure that people are answering that question. So I’m worried about whether I’m gonna be grouchy through this whole episode. I like the premise, I like the idea of helping us all get smarter and talking about the lived experience of this topic, but I would prefer some more precise language.
Lara: So from my perspective, I’m fascinated by the literature that addresses mistakes, failures. I often frame them as surprises, things I hadn’t anticipated would happen. In fact, short tangent, I had a great opportunity to do a lovely collaboration with Perspectives on Medical Education once, co-edited with Alisa Nagler. And we wanted to call it learning from mistakes, and then cross out the word mistakes and write the word surprises. And we wanted to leave the word failure crossed out.
Can I just tell you how many hours of emails I spent with the publisher saying, don’t delete the word that’s been crossed out. Leave it crossed out. Anyway, I’m really interested in the topic. I really am. And so I’m looking forward to this conversation. In terms of the premise, there was nothing in there that gave me cause for pause. There wasn’t anything that I was worried about. They make a bit of a comment that there is no other research on this topic, and I never think that’s a good justification for a study.
This topic is important to me, so I’m in.
Jonathan: So you all know that I do have a whole program of research around clinical reasoning and diagnostic error. I hate, just like Lara, when anyone’s justification is, we are the first ever, and then they choose niche, niche, niche, condition, condition, condition. And the punch line is, we’ve done this study. It’s called I Made a Mistake. And the senior author was [inaudible].
She did it as part of her PhD. It was, I think, a really nice study that in parallel would help inform what this study was trying to do. We used narrative inquiry and built a meta-story. And we had very different findings from this one. And not that every study I ever do needs to be referenced by someone else’s paper, but it’s interesting that when I go through and look at the reference list, the situating of their problem doesn’t become obvious to me in hindsight. It’s only after I got down into the results that I was like, this feels very different from the other stuff I know. That’s when I got a little worried about it. Now, Laura Zwaan is a colleague. She and I have been part of a number of projects together, and I respect her deeply. So when you sent this paper along, I was interested to read it. But I’m foreshadowing that I was a little surprised where they went, or how they arrived at that place.
Linda: OK. So we’ve heard some concerns, and yet it’s piqued the interest, I think, of most people here. So let’s dive into the methods very briefly. It’s a qualitative paper. They used an interpretivist (constructivist) paradigm, which, and I quote from them, “holds reality as multiple and subjective related to how individuals understand and create their own meanings influenced by specific social contexts”. They did semi-structured interviews of hospitalists with more than five years of experience in six hospitals, three community, three academic, in the US Northeast. They found 91 eligible people, invited 30 for interviews, and 24 were eventually interviewed. They used a guide to understand participants’ perspective of lived experiences, in this case with mistakes, and to generate rich descriptive data. And if you look at the interview guide, it certainly has some questions that would do that. They describe the interview guide and the piloting well. Half the questions focused on diagnostic errors, sorry, on experiences and reasoning with challenging diagnoses, and half focused on diagnostic errors.
So back to something that Jon said on a recent podcast, if you’re going to ask questions about a topic, don’t get distracted by asking questions about something else as well in the same interview. They also elaborated on the lessons learned from errors and successes, and that’s in fact what the whole point of this study was.
They did a reflexive thematic analysis, and they described it well. And now I have a question for Lara before I ask the rest of you for your comments on the methods section. There is a sentence which says, to assess for data sufficiency, we relied on information power and judged the sample to be adequate to answer the research questions. So I asked myself, what the heck is information power? I know saturation is now out, but has theoretical sufficiency also gone out the door?
Lara: OK, so saturation is not out, theoretical sufficiency is not out. They’re just different concepts and often work in different paradigms. But let me focus in on information power: what it is, how to use it, those sorts of things. The concept comes from a 2015 paper by Malterud et al. called Sample Size in Qualitative Interview Studies.
I’ll make sure that a citation, a link to that paper, is in our show notes. Now, problematically, they don’t actually give a hard and fast definition in the manuscript. But the basic premise is that in qualitative research, there are no hard and fast rules about how you know you have enough data to answer your question. There’s no equivalent to a power calculation. So instead, you have to take several different factors into consideration as you design and conduct your study. Together, these different factors help you to understand whether you have enough data to answer your question. So: do you have enough power in your data, enough information power, to answer your question?
There are five factors to consider in terms of information power, right? So I’ll give those five to you. One is study aim: a really broad question will need more data than a really narrow question. This makes sense. If you have a broad question, you have a lot of space to cover; if you have a much narrower question, your phenomenon is smaller. Two is sample specificity. This is about the specificity of experience: the more specific the experience you’re studying, the smaller the sample you will need. Number three is whether you’re using established theory. If you have a theory you’re using to shape your study and your analysis, you will need less data than if there’s no theory guiding your work. Four is quality of dialogue.
If you’ve done interviews, you know this to be true. Some participants are really good at talking. They’re really eloquent. And if you have really eloquent participants who are able to describe and articulate their thoughts well, you’ll just need fewer participants. Finally, there’s the analysis strategy. If you’re trying to explore a phenomenon across multiple cases, you need more participants than if you’re doing in-depth narrative analysis. So there’s a little picture, actually, in that paper, that we’ve recreated and that I’ll make sure is in the show notes. It’s totally worth a look. It shows those five elements, with the arrows and the information. It’s really good. Even Jon would like this picture. So it’s in the show notes.
Linda: Even though there aren’t straight lines, there are curves in the picture.
Lara: High quality stuff.
[music]
Linda: So, Jon, Jason, thoughts on methods?
Jonathan: I think overall, I understand how they got there and I understand what they did. I appreciated Lara’s explanation of information power, and I actually did a bit of reading around it, and I would say I’m not sure they had strong information power. They didn’t have a theory to inform it. And then the quality of the dialogue, I wonder about that.
If you look at their interview guide (this is my newest thing, I seem to be going to the supplementary material every time), a third of their questions are about their opinion, not their experience. If you’re going to tell me a story about when you made a mistake and what you learned from it, that’s valuable. But if you say, what should everybody know? What should we do differently in the system? How should we fix all this? That’s not your experience. That’s just pontification, not informed by your actual experience. And that kind of data, I don’t think, adds to information power.
You might have lots of words, but I’m not sure it’s grounded in something that’s lived or experienced. And so I just want to be cautious about that.
There’s a couple of things I thought were interesting. I’ve never seen this. They pilot tested their interview guide. What the heck is that?
Lara: I was going to make the same point. Things that make you go, huh?
Jonathan: I mean, there’s no right answer with a pilot test. You don’t want standardization with your interview. You want to make sure that the words make sense. You want to run them by some people, but you don’t want to say, hey, let’s go get some data and make sure that we have…
Jason: But they talked about ambiguities and so on. I thought that was okay.
Jonathan: It is funny. That’s fine.
Lara: No, that’s not the part that made me go, huh. Hold on. Maybe you’ll get there.
Jonathan: I’ll leave it for you. This is the last part I wanted to say: the reflexive statement is a non sequitur. I rarely comment on reflexivity statements, because I think people are getting what this is, but this one is really a non sequitur. I don’t think they understood what they needed to say there. And I’m prepared to say it’s because of the journal.
When we get to the limitations section, they do some other things with their non sequiturs. And I wonder if the journal said, hey, we’re not comfortable with this methodological stance. So, just so it all holds together, and wow, this might be weird, maybe the journal made them phrase it this way. They said stuff like, we didn’t do a sample size calculation because we’re doing qualitative methodology, which is like, yeah, okay. And we didn’t test any hypothesis. I was like, yeah, we got that.
Jason: You’re… you’re doing qualitative methodology.
Jonathan: On the whole, their methodology made sense.
Linda: Lara, did you want to just go back and say where you were going huh before Jason?
Lara: Yeah, there’s one point in the manuscript, well, there are several points in the manuscript that made me go huh, and I agree with you completely, Jon. I think it’s a factor of where they published and different expectations. The place where I went huh was the line that read…
So they’re talking about their interview guide and how they created it using the literature. Great. And expert input from co-authors. Great. But then here’s where I went, quote, “we presented preliminary versions of the guide at multiple division meetings in general internal medicine, hospital medicine, our institute, and made changes per the feedback we received”. End of quote. Huh. I have never done that. It never occurred to me to do that. And then I thought, well, that’s a good idea. But then I’m just not sure how I think about it. And so it made me go, huh? And I wanted to ask you guys, because I’m not sure.
Jason: I had scratched my head on that one, too. I thought it was kind of like, in a month I’m going to invite you to the interview, and here are all 10 questions I’m going to ask you, so you might want to think about some answers. Or, no, no, we want to change that question. That’s kind of a weird dialogue.
Linda: It’s funny. I viewed it as them presenting it to people who were going to be part of their cohort of interviewees, because you want to make sure that it’s understandable and not ambiguous and all of that. Anyway.
Jonathan: The questions are not ambiguous. If you go to them, the actual questions are: describe an important lesson you learned from an episode of diagnostic reasoning.
Linda: Yeah.
Jonathan: How do you, like, how do you try to improve your diagnostic…
Lara: But they took it to a meeting. I think that’s kind of cool, and at the same time, I’m also like, if I’m in that meeting, I’m the person going, I’ve got other things to do.
Jonathan: But good on them for a culture where you can discuss research design and integrate it into all the other things. So let’s not beat up on them for that. It’s just kind of a head scratcher. If you can integrate your scholarship into the administration and the finances and everything else you do, you may have a really good culture.
Jason: Is one of them the division head? Just checking, just asking.
[Laughs]
And now I’d like to tell you all about my research.
Linda: No, in fairness, okay, not all meetings are the same; a meeting could be anything. They could have had the opportunity, like many people, to present their research in progress. It could be that kind of a meeting. It doesn’t have to be about finance.
Jason: Could be diagnostic errors anonymous who knows?
Linda: Jason, why don’t you go ahead with some brief comments about the methods?
Jason: There are lots of funnies in this. And like Jon, there are all these funny lines that we could pull out. Funny in the sense that they kind of don’t quite match the stance. We should coin a new term: qual apologies, something like that. Little things that say, we didn’t have a hypothesis. I noticed all those too.
My challenge with this paper is that it just seems wrongly packaged. Not poorly, wrongly. I thought the intro was all about mistakes and how we can all get better. But really what they’re asking is, what have you learned in your practice to be a better doctor?
And like Jon, I was really struck that most of the anecdotes they printed are not about their personal experiences, like their personal choices as a clinician. They’re about team and system things, other teams’ things with a patient, that they related to. Like, hey, I had a patient, they got admitted to the ICU and they changed the med, and then they came back to me and they weren’t doing so well, and that made me think. There is one of them, though. One of the quotes was, we missed CNS lymphoma, or something like that. And that one was closer to what I thought the whole paper was going to be about. Here’s my experience, here’s what I learned from it, because all of those lived experiences have value. I could bring that to senior residents and say, hey, save yourself 20 years of practice: all of these people were in a study, and here are their pearls. But I didn’t take that away from these results or these methods. I thought these methods started in one place, then started asking questions about another thing, and then analyzed it another way. They lost me in the logic, and I really wanted what they proposed at the outset.
Linda: All right. So, let’s hear what they actually found. First thing I’ll say is they actually had a very broad range of demographics for the people they interviewed, in terms of age, experience, race, practice, academic rank, you name it.
They had five very broad themes about lessons learned from successes and failures, let’s say. The first is excellence in clinical reasoning is a core skill. The important part of that to me is that that includes foundational skills. You can’t make a diagnosis unless you gather the data, and you can’t gather the data unless you do a history and physical and order the right test and have uncertainty and clinical humility. So that was theme number one.
Theme number two was it really helps to talk to patients and to your colleagues, particularly when you’re stuck. That includes for patients, knowing the patients beyond their actual illness and what they’re here for today, and respecting all your colleagues. When the nurse says the patient doesn’t look well, listen to the nurse.
The third is thinking about the diagnostic process, having it somewhere above your brainstem when you’re actually making a diagnosis. Thinking about your assumptions, revisiting things when they don’t make sense, don’t blow it off, slow down when you have to. Go to the primary data, I’m always telling my residents that. Fourth, adopting a growth mindset, commitment to growth, they call it.
Learning from your cases by following up. What happened to that patient I sent to the ward yesterday? Did I make the right diagnosis? And if not, why not? And finally, and this one I had a little bit of trouble with initially, it’s called prioritizing self-care. Wellness and activities outside of medicine are important. And then I realized what they really wanted to say was, if you’re…
If you’re well in yourself, you’re much more likely to have improved performance. So, they then talk a little bit about how does this compare to what is actually known? The authors say the findings contribute to the literature on diagnosis education, which is a field involved in improving the diagnostic processes and promoting a cultural shift to the growth mindset. That sounds reasonable.
I didn’t know there was a specific field for that. And then they say two things that confuse me. First, they say the results are consistent with what has been reported in the literature on diagnosis education. And second, this adds credibility and specificity to the dimensions of quality embraced by diagnostic excellence. So what I heard is they’re saying, you know, this is consistent with what we know. And then they say…
Well, actually, no, it adds a bit of credibility to it. And I wasn’t quite sure what to make of that. They do conclude key lessons were learned when dealing with errors and successes in patient care by clinically experienced hospitalists. And they suggest that the findings could serve as a guide to developing priorities for helping clinicians continue to learn.
Thoughts on the results sections? Any issues with them? I’ll go Lara, Jon, Jason.
Lara: The point that I really want to make about the results is this, and I’m going to offer it as a tip to our listeners. And it truly is just a piece of advice. In a qualitative study, you should be able to read the results section without having to read the quotes and still understand the significance and the meaning of the themes. The quotes from participants are illustrations. They’re examples. But you can’t rely on them to tell the story, the importance, the interpretations. And this should also give you a sense of how much word count to dedicate to your quotes. If you write up your results section and it’s mostly participant quotes and very little explanation or description from you as an author, you’re likely relying too heavily on the participants’ words to explain your findings. Unfortunately for me, this was a bit of the situation I found myself thinking about as I was reading the results.
So I just want to suggest that. It’s one of the things I tell my learners: take all the quotes out, and if it still makes sense, you’re doing fine. If it doesn’t, then you’re relying on your quotes to do the work for you. The only other comment I want to make about the results, one of the things that I found really interesting, and I’ll be interested in what the others think. For me, what was interesting about these findings, the place where I would have put emphasis, is that diagnostic reasoning isn’t just within the mind, the knowledge, the skills, the attitudes of the individual physician. This study shows us that it is that, to be sure, but it’s also about the team you collaborate with and the fullness of the quality of their life. So diagnostic reasoning is not just that moment when you’re making the diagnosis. It’s bigger, it’s broader, it’s more. For me, that’s the takeaway message from this paper.
Jonathan: The findings all make sense to me. I’m not surprised to see any of them. I do wonder whether the way the questions were posed would let a story like the one I led with get into this paper. I don’t see my story of when I didn’t sleep for two weeks, when I really had nightmares for two weeks, and where that kind of story would get into this paper. The other paper that I was a part of, the senior author, let me just mention her again, is Kandasamy, has some similar findings. And the punchline there was, if you have support and you have a reflexive ability, it will lead to growth. But here they say things like, you should have a growth mindset, and you should have a system of feedback, and you should embrace mistakes as opportunities for learning. I just wonder if there was a bit of virtue signaling in the responses. And then the part about prioritizing self-care: exercise, sleep, healthy eating, and meditation are believed to be important, and reading fiction can help you with diagnostic skills. It doesn’t speak to the information power, the richness, where that’s coming from. All that to say, I agree with all of their findings. I think it’s all there. But then the part that kind of makes me wonder, did they get it, is I don’t see any of the other big author groups like [inaudible].
I don’t see our own paper. I don’t see it situated in the other literature. And when I go to the reference list, as I mentioned before, where is it feeding off of what is already known, what’s already been reported and already discovered around that? How does this move things forward? It almost feels like a bit of a lonely island. It’s connected to the other literatures, but not the ones that advance it forward. So I think there’s good stuff here, but it could be great if we could see how you connect, or how you see the difference, or how you see it builds. I don’t see that happening. And you talked about the results section, Lara. I’m talking about the discussion section now, and the so what of it. Here are our findings, and now, so what? That part didn’t get there for me. Okay.
Jason: I don’t want to pile on because I want Scott to still be my friend when I see him at conferences.
I read this paper with the lens of a frontline teacher who really cares about the next generation and wanting to save them time and I want to be that guy that gives them pearls and I was looking for this paper to be one of those sources that might help us. I didn’t find it. I’m really sorry. You know, there’s a logic here and I follow what the authors are telling us, but I didn’t find any of it actionable. If they’re telling me to be more well and help me make more diagnoses.
It’s just not something I can bring to my fellows and my most senior residents as they transition to practice. By contrast, you know, our group every now and then holds this panel of docs of various vintages, and they all talk about hard lessons they learned that changed their practice. I wanted more of that, and I didn’t find it here in this academic paper.
Linda: So I’m hearing the need for more specificity in terms of what Jason’s calling pearls, and I’m also hearing that the so what, what’s actionable, is not in here. They do say this should act as a guide, but they don’t exactly say how. I’ve got to say what I got out of it, and maybe this is a couple of paper clips. One of the paper clips is that we still have to teach, and it goes back to the first theme that they had: you have to have good diagnostic reasoning and you have to have good data for that. So we still have to teach, and you still have to learn, how to do a history, how to do a physical, how to select labs, that sort of thing. So that’s one of the big things I took out of it. The other thing I took out of it is that this is a clinical journal. And two or three of you mentioned, well, maybe it’s because of the journal. But I think if I were a clinician reading this, you know, good on them. This is a qualitative education study, and I think it’s reasonably understandable for clinicians who are reading a clinical journal. Maybe that explains some of the apologies. Apologies, we don’t have a sample size. Apologies, we don’t have whatever else it was. We’re not testing hypotheses.
They may have been asked to put that in because it was that kind of a journal, but frankly, I think it does make it understandable for a clinician. I think so, yeah. And good on them, as I always say, when we have a clinical journal publishing education stuff. Don’t forget to use my term, the qual apologies. I like that. Hashtag. All right. Let’s go to our assessment.
[music]
Linda: In terms of the methods, I’ll go Jason, Jon, Lara.
Jason: I’ll make the grouchy guy go first. Scott, remember, we’re friends. I had trouble with the logic, not with what they did. I had a little trouble with how things were analyzed and illustrated. So I’m going to give this a three.
Jonathan: I’m going to give it a three as well. For me on the scale, a three means this is representative of the state of the art. And I think it’s right, right down the middle.
Lara: The same for me. I’m giving it a three, no fatal flaws. Absolutely. It’s solid. It’s what I would expect.
Linda: You’re not going to add anything for the paradigmatic approach, which they have?
Jonathan: Come on now, stop, stop with this.
Linda: you usually do.
Lara: Linda, I am in so much pain, like honestly, hun, I’ve had so many painkillers. I’m just thrilled to be sitting upright still. So let’s keep going, friend.
Jonathan: We just need you to be high and then you’ll be led by the nose.
[laughter]
Is that what I just discovered?
Lara: I will, right now. Like, there’s reasons my husband won’t give me pens to sign anything right now, cause I’ll agree to anything.
Linda: All right. And I’m actually going to give it a three for methods. I thought they were very clearly explained. They’re sort of straight down the middle.
[music]
Linda: How about the usefulness, the education impact of this? We will reverse it so Lara can go to sleep more quickly. Lara, Jon, Jason.
Lara: That’s a harder one for me because I’m not sure. I’ll give it a three. This is about where I’d place most of the literature, so a three.
Jonathan: Same for me. It’s a study that brings some of the same themes back into the literature and supports them in a different context. And so it adds to the transferability of these findings that we’re seeing from multiple studies.
Jason: Two for me, because I read it looking for those pearls, and there’s only a few in there. I really hope that somebody listens to this podcast and says, you know what, I can do a study that takes care of all these flaws or concerns that these hosts had. I think another study should be coming that would build off this conversation.
So it gets a two for me.
Linda: So I gave this one a four, so I guess we all even out at three. I thought there were some useful concepts in here, maybe because I’m an internist and I can sort of see where they’re going with some of this. It made me think, and it confirmed for me some of the things that we have to do in terms of teaching and learning the basic data gathering.
I think we’ve got an almost down the middle three for methods, and if you take an average, it’s three for impact. So we’ll call it a down the middle solid paper here. All right.
And that’s it for the Papers Podcast for this week. I would like to remind you that we’re always interested in what you have to say. You can write to us at thepaperspodcast@gmail.com, or you can check out our website at paperspodcast.com. And I got those right.
Jason: Well done.
Linda: Yay.
Jason: Three in a row.
Jonathan: Well, it took us two years, just saying.
Jason: 15 years.
Linda: So from my perspective, see you next week. And bye -bye.
Lara: Talk to you later.
Jonathan: Thanks for listening.
Jason: Take care, everybody, especially Lara’s tooth.
Jason: You’ve been listening to the Papers Podcast, and we hope we made you just slightly smarter. The podcast is a production of the Unit for Teaching and Learning at the Karolinska Institutet. The executive producer today was my friend, Teresa Sörö. The technical producer today was Samuel Lundberg. You can learn more about the Papers Podcast and contact us at www.thepaperspodcast.com. Thank you for listening, everybody, and thank you for all you do. Take care.
Acknowledgment
This transcript was generated using machine transcription technology, followed by manual editing for accuracy and clarity. While we strive for precision, there may be minor discrepancies between the spoken content and the text. We appreciate your understanding and encourage you to refer to the original podcast for the most accurate context.
Icon designed by Freepik (www.freepik.com)