Transcript for episode 67

This transcript was made with an autogenerated text tool, with some manual editing by the Papers Podcast team. Read more under “Acknowledgment”.

Jason Frank, Lara Varpio, Linda Snell, Jonathan Sherbino.

Start

[music]

Linda Snell: Welcome back to the Papers Podcast. Linda here. For those of you who listen regularly, thank you for coming. For those of you who don’t listen regularly, you should be, because you’re here now and we’re yelling at you: get on board.

Lara Varpio: You got a word of welcome. Nice to have you. I still say welcome back.

Jonathan Sherbino: Get your feet off the carpet.

Linda Snell: I was going to say, you should be listening regularly because there are lots of people, the hosts on this podcast, who say really important things about Medical Education. Like Lara, for instance, who’s going to tell us about her paper.

Lara Varpio: Hi, everybody. So glad to be here today. And thank you, Linda, for the warm welcome. Jon, how are you doing, hon?

Jonathan Sherbino: I’m doing great. I’m fully caffeinated. I have my Father’s Day owl mug rolling.

Jason Frank: We should describe that for audio listeners.

Lara Varpio: It’s an owl.

Linda Snell: It’s a pink owl.

Jonathan Sherbino: It’s an owl. It was painted by a small child at some point.

Jason Frank: It’s a pink owl.

Jonathan Sherbino: My favorite color. I’m sure my children secretly put lead paint in it because they look at me and say, keep drinking. Keep drinking from the cup. Not long before this is all ours.

Lara Varpio: And how are you doing, Jason?

Jason Frank: I’m great.

Lara Varpio: Sweet. And Linda, you are having a good day in Montreal. The sun’s shining, birds are singing.

Linda Snell: Excellent. Warm, sunny. I can hardly wait to get outside. So get on with this, will you, Lara?

Lara Varpio: Let’s do it. So friends, I picked this paper for a reason that’s really simple. I know for our regular listeners, it’s going to come as a shock to hear that perhaps my technological skills are not as high as you might imagine.

Lara Varpio: And so when I see a paper that summarizes some literature about things that I should know from the tech side of the house of Medical Education, I’m on that like white on rice. So I have to be honest, though: I’ve embraced ChatGPT. I know how to use it. I can do that.

Lara Varpio: I am beholden to my navigational apps like Waze and those things, because I literally have no sense of direction. So I need that. And as a researcher, friends, if you haven’t played with Scopus AI, you are missing out. You’ve got to get onto Scopus. It has an AI function. It’ll change your life.

Lara Varpio: So when I saw this manuscript, I’m like, yes, I need this. I appreciate AI, but there’s a bunch of papers in the field now and I need somebody to help make sense of it. Ergo, I picked this paper. The paper is entitled “A Scoping Review of Artificial Intelligence in Medical Education.” It’s a BEME review published just this year, 2024.

Lara Varpio: Now, before we get into the meat of the paper, I’m curious about how all of you have adopted AI into your Medical Education practices. And I’m okay with the fact that maybe you haven’t. But have you seen your learners and colleagues using it in interesting ways? What has surprised you about AI to date?

Lara Varpio: Jon, I’m going to make you go last because, I’m pretty sure, if it was possible to literally mainline the technology into your veins, you’d be like, move over, Seven of Nine, I’m plugging into the Borg. So I’m going to make you go last. Let’s go Jason, Linda, Jon.

Jason Frank: I’m glad you asked me first because I have a quick question for you. Why is this a BEME review? It’s a scoping review, which is kind of like, you know, here’s a picture of the landscape. But it doesn’t strike me as something that should be a BEME review, which is best evidence available. What’s your take on that?

Lara Varpio: So my take on that is, and I’ll be really interested in what the rest of y’all think, because my understanding is that BEME is an organization that’s A…

Lara Varpio: Don’t ask me why I feel it’s affiliated with AMEE or the specific way in which it’s affiliated because I don’t know the specifics there. But I believe it’s affiliated with AMEE. And the idea behind this BEME group organization is that they help to make sure that knowledge syntheses that are done in Medical Education are done in a robust kind of way.

Lara Varpio: So if you do a scoping review or a systematic review, you can work with or apply to BEME to have somebody look over your materials, over your context, over your processes, to make sure that what you generate actually is best evidence. That’s my understanding.

Jason Frank: It’s like a stamp from a group as opposed to this is best evidence. It’s just a name.

Lara Varpio: Maybe. That’s your take on it? I don’t know. I don’t know.

Jason Frank: Okay. So back to your question, AI: I am still a dabbler. My kids are in this house in their 20s. AI automates a lot of things. I’ve started to automate some things, not as much as Jon. And I’ve got to tell you though, when I’ve used AI as part of searches, I get a lot of hallucinations, so that kind of turns me off, but I’m working on it.

Linda Snell: I use the navigation aids and I use ChatGPT for myself, but I’d like to talk about how my students use AI, usually ChatGPT and usually when they’re writing a paper. And often, but not always, when they’re writing in a language that’s not their first language.

Linda Snell: And sometimes what comes out hasn’t been checked. And just the same as Jason gets hallucinations. Students often get hallucinations and they don’t check or don’t know to check what’s coming out. And so sometimes you get really weird things in their assignments.

Jonathan Sherbino: All right. I got lots of things to say. First, I love, love, love the Star Trek reference. Thank you very much.

Lara Varpio: Buddy! Totes!

Jonathan Sherbino: There we go.

Lara Varpio: Oh yeah, live long and prosper.

Jonathan Sherbino: So for those of you who think that there are hallucinations, you’re using an old generation. Those hallucinations are going away fast. There’s a lag in your own human brain with what’s actually happening. In terms of search, I’m going to direct you to Elicit. Have you tried that, Lara?

Lara Varpio: No, I don’t know Elicit.

Jonathan Sherbino: I don’t have any stakes in the company, so this is not A…

Lara Varpio: It doesn’t sound polite, though. Can we talk about it on the podcast?

Jonathan Sherbino: It makes it even more exciting. Okay. But I would love…

Jason Frank: Is it Explicit or Illicit?

Lara Varpio: I heard Explicit, so…

Jonathan Sherbino: It does great work summarizing and searching for you in terms of technical papers. My new writing hack is I will dictate into a document, and I get a flow of consciousness, and I get past writer’s block, and I do all that, and then I dump it into an LLM and it cleans it for me and it comes back as workable prose.

Jonathan Sherbino: If you’ve ever listened to a transcription of a conversation, you’re like, this is just junk when you read it. But if you can just talk your way to the first draft and then have Gen AI clean it and then you actually have a workable first draft, that’s really exciting in terms of just workflow.

Jonathan Sherbino: But the last part is actually I’m founding a startup that’s using…

Lara Varpio: Are you really?

Lara Varpio: Count me in. I don’t know what we’re doing, but I’ll do it.

Jason Frank: Are you surprised?

Jonathan Sherbino: I have an NDA you’ll need to sign. Actually, I have an NDA for all the listeners; they have to sign it before I can tell you more. But I think this is an interesting way to advance teaching, with a transformation of the type of teaching we’re doing rather than just an augmentation.

Jonathan Sherbino: We’re going to talk about transformation and augmentation later in this podcast, because I think there’s a really great framework for adjudicating new educational technologies. So yes, it’s here. Welcome, Skynet. Take me up. I’m ready. I’m ready. I’m ready for the cable to the back of my head.

Lara Varpio: Beam me up, Scotty. All right. So this paper, as I said, is a BEME scoping review. And I just want to say that the author list is long. Many people you know are on the author list. First author is Morris Gordon.

Lara Varpio: And the goal of the research, and I’ll just read it for you: “This review aimed to map the literature regarding AI applications in Medical Education, core areas of findings, potential candidates for formal systematic review, and gaps in the literature for future research.” Fine.

Lara Varpio: So let’s dive into the methods. The authors conducted a scoping review with a twist. In the abstract, they call it a rapid scoping review. In the first line of the methods section, they say the scoping review was conducted in a rapid time frame.

Lara Varpio: So this led me to a lovely afternoon of reading about the methodologies of rapid reviews, because I didn’t know much about rapid reviews, and rapid scoping reviews more precisely. Let me share with you just a little snippet of what I learned. From what I can find, a rapid scoping review is really quite similar to a scoping review, but there are a few key differences.

Lara Varpio: One is that the project timeline is short. So if a scoping review is expected to take a year or so, maybe more to do, then a rapid scoping review is closer to four months. If scoping reviews have several broad research questions, a rapid scoping review is going to limit that scope.

Lara Varpio: And if scoping reviews have data extraction approaches that focus on depth and the generation of new knowledge, then in a rapid scoping review those questions are more tailored in the data extraction, trying to meet a very specific aim, probably a narrow one.

Lara Varpio: So this study was conducted within 16 weeks of its inception, like, wow, 16 weeks from the day they started to the day they were done. So that’s pretty rapid, for sure. They followed Arksey and O’Malley’s methodology. Fine. They chose not to follow Levac’s recommendation for external stakeholders because they felt they had sufficient expertise in the author team.

Lara Varpio: As I commented previously, look at the author list. You’re going to know these people. I’m okay with that. They searched PubMed, Medline, Embase, and MedEd Publish. Now, for the record, I did think that was a bit odd, because PubMed and Embase, that’s fine. They are good databases. But…

Lara Varpio: Maybe not as many as we’d expect, but it’s a rapid review, fine. But MedEd Publish, I’d be interested in what you thought about that, because MedEd Publish is a post-publication review platform, right? So… MedEd Publish? Which one’s MedEd Publish?

Jason Frank: This is the AMEE alternative open access platform.

Jason Frank: No. Right? It’s like, what’s the one the AAMC runs? It’s a repository of educational objects. Come on, guys, help me here.

Linda Snell: MedEd Portal?

Jason Frank: MedEd Portal. I think of it like the MedEd Portal.

Lara Varpio: But MedEd Portal is different than MedEd Publish, right? Because MedEd Publish has the post-publication review, right?

Lara Varpio: No. Jon, I’m looking at you.

Jason Frank: It’s an alternative platform.

Linda Snell: So are these peer reviewed? What’s on there? Yes.

Jason Frank: Yes.

Lara Varpio: Okay. We’ll discuss this more.

Jason Frank: Readers can help us out, but go ahead and tell me everybody that I’m right.

Lara Varpio: No, I’m not going to go with that. What I’m going to do is I’m watching my friend, Jon, go online and sort this out right now, because he’s not going to let us have this hanging.

Jason Frank: Jon gets all the cred.

Lara Varpio: What the hell? Because he’s good. No, he is. But I mean, from a technology side. Look at us.

Jason Frank: It’s not a tech issue.

Lara Varpio: Well, I’m just saying he’s recording and he’s doing a search. I’m not sure the other three of us can do two things at the same time. Moving on.

Jonathan Sherbino: All right. So here it is. MedEd Portal. You put an innovation online and you actually share the innovation. You share the curriculum. You share the tool. You share the cases. And that’s funded by AAMC.

Lara Varpio: And that’s that. Which one is that one?

Jonathan Sherbino: MedEd Portal. Portal. MedEd Publish.

Lara Varpio: Yes.

Jonathan Sherbino: Is.

Lara Varpio: AMEE.

Jonathan Sherbino: Rapid. Funded by AMEE. It’s a rapid, transparent publishing site that does have peer review. It would look much more…

Lara Varpio: But peer review is after publication, isn’t it?

Jason Frank: No.

Jonathan Sherbino: So it’s exactly like Cureus, which is an open access resource. So sadly, I’m going to have to agree with you on this one.

Lara Varpio: All right. Come on. Anyway, I feel you may have had a partial point. But my point here is that I found myself having a whole moment of, okay, database, database, and… what? But anyway, that’s fine. Let’s talk about it. I’d be open to your conversations and comments about that selection.

Jonathan Sherbino: Now, hang on here.

Jonathan Sherbino: Peer review of articles takes place after publication. The article is published.

Lara Varpio: Dork. Win.

Jonathan Sherbino: I wasn’t going to let Jason win. Don’t worry. I got you. I love you.

Lara Varpio: I love you.

Jason Frank: So wrong.

Jonathan Sherbino: But you have to. You have to get an expert reviewer and you invite them to review it. And then they put their review online beside your paper.

Lara Varpio: Okay. So I post my paper. They post it without review. I find a friend and say, could you please go say nice things about my paper on the website? And then they do.

Jason Frank: Yeah. It’s just like Cureus.

Jonathan Sherbino: It’s exactly like Cureus. What are we talking about again on this podcast?

Lara Varpio: I don’t know. I scared my dog when I yelled, so she ran away.

Jason Frank: We’re deep in rabbit holes. We need a rabbit sound.

Lara Varpio: Back to the authors and this literature review.

Lara Varpio: They did a hand search of identified articles and added manuscripts that were cited that they deemed relevant. They included pretty much everything published that addressed AI and med ed. Of my collection of articles, all of them were in there. So, yeah.

Lara Varpio: They excluded only articles that dealt with AI for diagnostic purposes, clinical or organizational purposes, or that talked about AI only for a research purpose. They divided the corpus into two for data extraction. One group consisted of all the papers that were research reports or innovations. The second consisted of perspectives and opinion papers.

Lara Varpio: For the research and innovation papers, they developed and then used an extraction tool that collected information about study demographics, how the AI was used, the kind of AI used, the rationale for it, etc.

Lara Varpio: For the perspective papers, they used the same data extraction tool, but also added details about the rationale for using AI, its application in a framework, and recommended topics for curriculum and research in this area. So while we’ve had a robust discussion about all kinds of things that are not relevant, I would be very interested in your thoughts on the methods. Let’s go, Linda, Jason, Jon.

Linda Snell: So I love the fact that they had sort of a double-pronged thing. If you look at most lit reviews, they exclude opinion papers and perspectives and things like that. And I think for this particular research question, and for this type of a scoping review, it was really important to have those in there. So good on you guys for having that two-pronged approach, part on innovation and part on perspectives.

Jason Frank: You know what? If you accept that it’s a rapid, rapid review, it’s right on the money. And they also get extra marks from me for the nice colorful diagrams. Even Jon would like these infographics, and he’s pretty critical.

Jonathan Sherbino: This is an exemplary rapid scoping review. They follow the PRISMA extension. They have a large author group, and you might wonder: why do they have 16 authors on this paper? It’s because they have a lot of work to do. They have to do it quickly, but they also need to synthesize the literature. They wrap their arms around a big corpus of studies and editorials and commentaries.

Jonathan Sherbino: Their search ends August of ’23. They submit December ’23. They’re accepted January ’24. And they’re published in a volume April ’24. No wonder they have 16 authors. I have no concerns about any methodologies. I’ll pay attention to their results, because I don’t think there’s a critical or even a minor flaw here.

Lara Varpio: Knowing that they had 16 authors and listening to that timeline, do you know what boggles my mind? Even with shorter author lists, like fewer people, I have a hard time getting all those ducks in a row for a paper. They went from finishing the review to writing the paper in a really short time.

Lara Varpio: So to the lead author, congratulations on the leadership skills, because that’s legit. Okay. Moving on to results. How many papers did they end up with? Brace for it. Hold on tight: 278. Shoot me now. That’s a lot. And now we remember why they had 16 authors. But, you know, respect. They did this in 16 weeks.

Lara Varpio: They say a full appendix of extracted data for the papers has been uploaded to a repository. While the first paper about AI was published back in 1998, the real surge of papers came around 2018. If there were 11 papers published in 2018, that number was up to 57 in 2022, and it was at 114 as of August 2023.

Lara Varpio: So yeah, it’s a hot topic. About 70% of papers were research and innovation; the other 30% were perspectives and opinions. The focus of about 50% of the corpus was undergraduate Medical Education, leaving 22% for GME and only 3% for CPD. And we’ve already thought about whether this is good. Go ahead, Linda, you want to say something?

Linda Snell: I was really, in a way, disappointed that only 3% was CPD, though I wasn’t surprised, because I think that that’s where we need to focus our attention when it comes to being efficient in education for people in practice.

Lara Varpio: Yeah.

Jason Frank: I wonder if that’s just proportional to the whole MedEd literature. There are fewer papers about CPD.

Lara Varpio: Fact. Okay. But the papers were from 24 different clinical specialties and basic science departments. So, you know, long story short on the demographics: everybody’s talking about it. Everybody is in on this conversation. So when you get to the results and you see (the technical term is) a bucket ton of different segments for the results, that’s why.

Lara Varpio: Because there are 278 articles, they found many different ways of mapping the literature. So instead of me reading all of that to you, what we’ve done amongst the four of us is we did some homework and we all picked a section. The part that I thought was really cool was about the ethics. So my interest was piqued by those 14 papers in the corpus that addressed ethics.

Lara Varpio: They talk about cautions around the limitations of AI applications in Medical Education and about the need for educators to really grapple with the ethics of AI, when, let’s be honest, most of us are still trying to figure out how to use it, what it is, what we do. It’s a real moving target that we’re actively in the midst of developing.

Lara Varpio: So ways of using AI are constantly being developed, and so we need to be constantly reflecting on the ethical questions that those evolutions and developments bring with them. The paper offers a list of topics that should be covered about AI ethics. Some of the ones I thought were particularly important are:

Lara Varpio: Algorithmic bias and equity, right? So what data is in there makes a big difference to what comes out, what’s generated by the AI. And we know that the knowledge going into the AI has implicit bias and structural inequity embedded in it. So we really do need to be worrying about that.

Lara Varpio: The second one I just want to highlight is about transparency and informed consent. Physicians have a duty to inform their patients about how AI is being used in their treatment and care planning and about how the data is being collected and used. Their anonymity is not ironclad, right? With more and more AI resources being developed, it’s getting increasingly possible for people to be identified across platforms.

Lara Varpio: So we really have got to be thinking and asking hard questions about: what data am I generating? How am I sharing it? And I also have to ask questions about the product that’s generated by the AI: is it really an equitable, just product? Or do I have to ask some questions here?

Lara Varpio: So with that, that’s my view on the ethics piece. I’m interested in what you’re thinking. So let’s go, Jon, Linda, Jason.

Jonathan Sherbino: First off, in terms of the scoping review, the results are typical of a scoping review. At the start of the review, there’s very little. And right at the end of the search, there’s a lot. Because you choose a topic that is timely and of interest, and it builds over time.

Jonathan Sherbino: Nobody’s saying, hey, I wonder what the lecture of the professor in an operating theater looks like, because that’s what we did 200 years ago. So that kind of feels the same. We also see the similar, typical demographics around geography. There’s over-representation of the global north versus the south. That probably speaks to constraints and resources.

Jonathan Sherbino: And then the last part is we don’t see continuing professional development really included in this conversation. Our education scholarship is heavily focused on UGME and then subsequently on PGME. And we forget where I think AI might have the biggest opportunity. It’s the cultivation. It’s the just-in-time.

Jonathan Sherbino: It’s a very tailored distribution to a busy clinician in practice. That’s an interesting gap for me. Now, I want to talk about assessment. And as I kind of alluded to before, I want to talk about the SAMR ed tech model. I’m not sure if we’ve talked about it on the Papers Podcast before, but it’s a fairly well-known model, and it stands for substitution:

Jonathan Sherbino: How can a technology replace a traditional method? Augmentation: how can a technology bring a new functionality to an existing process? Modification: how does it redesign or improve a task? And redefinition: does it bring something completely new into the conversation?

Jonathan Sherbino: And I think when you use this type of framework, it helps you say what’s happening with technology. It’s not all apples. It’s more fruit salad. It’s tweaks, or it’s complete change, or it’s replacement, or our computer overlords have made us redundant as assessors, which might get a cheer from some of the people on the podcast.

Jonathan Sherbino: Let me give you some examples, but there’s a richness to it. It seems that the literature sits very heavily at substitution, where we take out menial tasks using AI for assessment, or augmentation, where we take a task but make it a little bit more streamlined. And I’ll give you some examples. So at substitution, there’s a whole bunch of papers around multiple-choice question generation.

Jonathan Sherbino: At augmentation, there is the video analysis of psychomotor skills. So can AI see how you’re doing and produce a score parallel to a human assessor’s? Or can human assessors provide narrative feedback, and then the AI actually scrapes that data, analyzes it, and provides a summation of all of the narrative scores for an individual? And that’s an example of one of the studies that we perform.

Jonathan Sherbino: So can you take 20,000 words of narrative assessment over the course of residency training and say, here’s the trajectory, here’s the arc? At the modification level, can you take a task and redesign it in a significant way? So one example is the development of a virtual OSCE, which also has online grading. And so we take something and we transform it in a way.

Jonathan Sherbino: And then at the redefinition level: can we predict medical students’ success on high-stakes exams when we dump all of our assessment data in, and produce true learning curves or true future-performance models? That was the holy grail of programmatic assessment 10 years ago. And maybe with AI, we might be able to get through the noise to actually see some data.

Jonathan Sherbino: So, and I’m a big positivist for AI, I think there’s lots of opportunity here to take our role of assessment and substitute some of the meaningless work that we do, augment, and so we get better outputs.

Jonathan Sherbino: Modify, meaning we change our current models, but turn it into something that gives us new ways of thinking or completely redefine how we’re going to assess. So I’m pretty bullish on where we’re going with AI and assessment.

Jonathan Sherbino: P.S. Love the graphics in this study. I don’t say that often. So, P.S., love the graphics.

Jonathan Sherbino: There is a lot of clip art, but they did it in a good way.

Linda Snell: Yeah, they’ve got lots of primary colors, Jon. That’s good for you to play with. Get your crayons out. I picked the section on admissions and selection and the use of AI there. And I first started thinking, what are some of the challenges in admissions and selection? And there are a couple that come to mind. One is bias.

Linda Snell: The second is, oh, I’ve got a huge number of individuals to look at, the large numbers. The third is, am I making the right decisions? In other words, the prediction. And the fourth, which is a very practical one, is everybody’s asking me the same questions when they come for the interviews, whatever they’re coming for.

Linda Snell: Can I deal with that to make things more efficient? So when you look at some of the AI papers: first, AI has been used to host a question-and-answer session. I presume this was for candidates who are all asking the same questions, and so that’s obviously very efficient. Second:

Linda Snell: AI was used to detect a gender bias in letters of reference, and in fact, there was a gender bias in letters of reference. Third, AI was used to rank medical student evaluations, comparing that with faculty evaluations. And in this case, AI failed. It didn’t pick up the nuances and the subtleties. Fourth, as a predictive model: who was ranked and matched? And the success in that was very high. AI could be very helpful there.

Linda Snell: But most of the studies had to do with screening applications when you have large numbers of applicants. And AI was very successful for that. It also identified applicants who might otherwise have fallen through the cracks, who might otherwise have not been chosen because of a bias.

Linda Snell: So bottom line is, I think, if you think about the challenges you have in a particular area and how AI can help: in general, AI can help with admissions and selection.

Jason Frank: Cool. I’m also very enthusiastic about this paper. When I first looked at it, I thought, oh, this is going to be a giant list, a long smorgasbord of every paper. But actually, it’s really good. It’s really accessible. It’s going to be highly cited.

Jason Frank: Let me highlight one area which I think has potential. So all of you, I know Jon has seen it, all of you have watched the reboot of Star Trek that came out a few years ago. And there’s a scene.

Jason Frank: Jon’s doing, like, peace and prosper.

Jason Frank: There’s a scene where a young Spock in this reboot is in this pit. Yes. And there’s an AI interacting with Spock, teaching, correcting, giving more challenging questions. Kind of like it’s responsive to his right answers and so on. So isn’t that cool? There’s a section in this paper on that:

Jason Frank: You can teach clinical reasoning using AI that is tailored, that can generate a whole library of clinical cases with key features, and help a given learner recognize key features, get more accurate, be even smarter. I think that’s got a lot of potential, and it’s probably the future.

Lara Varpio: So just for the record, listeners, Star Trek isn’t in the paper. It’s just, like, a common trend apparently throughout our conversation today. So don’t be disappointed when you don’t find a figure of Spock in the hall with the little teacher outside, because there’s lots of other figures in the paper that are excellent.

Lara Varpio: So I’m just going to end with a few words about the discussion. Two things specifically. One, the paper ends by offering what they call the facets framework. And the facets framework is all about trying to make sure that all the papers that come in the future are comprehensive.

Lara Varpio: They cover all the aspects of the AI that we want to know about, so that we can do cross-comparisons as this literature grows and develop greater insights, making synthesis probably easier. So you can kind of imagine how the facets framework came to mind. It’s not because their job was easy, right? It’s because papers were missing important pieces.

Lara Varpio: I encourage the listeners to go and look at that framework. And if you’re going to write about an AI innovation, I think it’s probably a good idea to look at the facets framework just to say, did I address and mention and report all of these things? Go ahead. Sherbino has got another interruption.

Jonathan Sherbino: I just wanted to endorse what you’re saying. We’re at the period where everyone has something new and shiny. But at some point, we need to mature this literature. And so Dave Cook would remind us that comparative effectiveness research is really important.

Jonathan Sherbino: So you have to say what you’re actually doing, and using the facets framework allows you to tag or to categorize it, so that the next iteration is not just a new thing.

Jonathan Sherbino: And that we don’t just keep seeing shiny baubles everywhere, but we try to advance MCQ generation forward, we try to advance video OSCEs forward. And you can’t compare them just on the superficial features. You need this framework to describe it.

Lara Varpio: Linda?

Linda Snell: I agree. When I first looked at the framework.

Lara Varpio: I pooh-poohed it.

Linda Snell: I said, ah, this is very generic. It’s not going to help. And then I realized it has to be generic because it’s addressing the broad field of AI. And so, yeah, I think it’ll be helpful.

Jason Frank: But couldn’t it be used, honestly, for any innovation? You just adapt the wording a little bit, so it’s not about AI? It seems really useful actually that way.

Lara Varpio: We have had a great conversation. I’m excited to get to our voting because we’re all going to agree it’s a five across the board on both items. So let’s start with methods and we will go Jason, Linda, Jon.

Jason Frank: Methods were good. Four.

Lara Varpio: Linda?

Linda Snell: They’re good, but more than good. Five.

Lara Varpio: Jon?

Jonathan Sherbino: Can I just say that for the first time in forever, you forced us to use an ordinal scale and I’m so happy. And there’s going to be no…

Lara Varpio: I’m going to give it a 4.5. That’s just to drive you nuts.

Jonathan Sherbino: It’s a five.

Lara Varpio: I’m going to give it a four and a Spock. Boom shakalaka.

Jonathan Sherbino: Oh man. Why did I just push your buttons? It’s a five. This is great methodology.

Lara Varpio: And now to results, on a scale of one to five. Let’s go inverse: Jon, Linda, Jason.

Jonathan Sherbino: Mine’s a five with an asterisk. And the asterisk is…

Lara Varpio: Oh, and you’re giving me a hard time for not doing ordinal whatever.

Jonathan Sherbino: This is going to stale date very quickly. Yeah. So right now, this is state of the art. This changes everything. But read this paper in two years. You’re like, yeah, I know all this. And so read it now and tell all your friends now.

Jonathan Sherbino: But don’t put it in your reading list to get to at a future date, because then you’re like, oh, this is just dated, or this is what it looked like 18 months ago. This is Moore’s Law happening right now.

Linda Snell: Yeah, I agree it’s a five now, but I wouldn’t give it two years, Jon. I’d give it six months before it’s stale dated.

Lara Varpio: Yeah, lots of papers coming out. Jason?

Jason Frank: I think it’s a five because people will refer to this for the next year or so, and you can reference it to launch any AI paper, and facets is useful.

Lara Varpio: And I, too, am going to give it a five for all the reasons mentioned. And so with that, friends, and just because Jon was so excited I gave it a five, I’m going to give it a five and an asterisk. And I’m not going to tell you what the asterisk is for, and that’s going to drive you nuts.

Lara Varpio: So friends, thank you for listening to yet another recording of the Papers Podcast. We’re so glad you’re here. And because we want to hear from you, we are going to ask you to get in touch with us. And because it’s a skill that is like a Vin Diesel level skill, Jon’s going to tell you how to get in touch.

Jonathan Sherbino: You couldn’t remember, could you?

Lara Varpio: No, I couldn’t remember.

Jonathan Sherbino: You can reach us on our website, paperspodcast.com. Or if you’re Vin Diesel and you still use email, and you’re fast and furious with your typing, you can reach us at thepaperspodcast@gmail.com.

Jonathan Sherbino: I did want to give a shout-out to one of our listeners, I think a new listener to the Papers Podcast, Suraj Mithawati. Suraj is a hematologist here at McMaster. And he wrote us a really nice note. We covered one of his papers; I can’t remember which episode it was. It was a paper on practice variability. And his comment is: “It’s always been one of my nerd goals to have a paper featured on the podcast.

Jonathan Sherbino: Love the discussion and learned a lot from it.” I’m really grateful for that kind of comment, when we can add value for the authors, let alone the listeners. So thanks, Suraj, for the note. Thanks for also taking my calls in the middle of the night when I’m saying, the CBC looks all crazy, what do you think it means? Thanks for listening.

Lara Varpio: Talk to you later.

Jason Frank: Take care, everybody.

Linda Snell: Bye-bye.

Jason Frank: You’ve been listening to The Papers Podcast. We hope we made you just slightly smarter.

Jason Frank: The podcast is a production of the Unit for Teaching and Learning at the Karolinska Institute.

Jason Frank: The executive producer today was my friend, Teresa Sörö.

Jason Frank: The technical producer today was Samuel Lundberg. You can learn more about the Papers Podcast and contact us at www.thepaperspodcast.com. Thank you for listening, everybody. Thank you for all you do. Take care.


Acknowledgment

This transcript was generated using machine transcription technology, followed by manual editing for accuracy and clarity. While we strive for precision, there may be minor discrepancies between the spoken content and the text. We appreciate your understanding and encourage you to refer to the original podcast for the most accurate context.


Icon designed by Freepik (www.freepik.com)