#58 – The Great Debate: Education’s ROI on Patient Health

Episode host – Jonathan Sherbino

Dr Jonathan Sherbino, portrait.
Photo: Erik Cronberg

Can the impact of education on patient outcomes be measured? In this episode the hosts take on this hotly debated question with the help of a recent study published in JAMA. Tune in for a discussion that both challenges your working memory and examines the real-world implications of education in healthcare.

Episode article

Gray, B. M., Vandergrift, J. L., Stevens, J. P., Lipner, R. S., McDonald, F. S., & Landon, B. E. (2024). Associations of Internal Medicine Residency Milestone Ratings and Certification Examination Scores With Patient Outcomes. JAMA.

Episode notes

Background

Buckle up. This is gonna be controversial. We’ve said it here before: the link between education and patient outcomes is a tenuous (at best) connection. There are so many confounding elements that some educators – looking at you, Dan Schumacher – have advocated for clinician/resident-sensitive quality measures. Essentially, these are clinical markers of care that can actually be influenced – directly – by the practice (and presumably the associated educational intervention) of a resident. Yet the other side of the house of medicine is asking – louder and louder – if we invest so much in education, with dollars diverted from clinical care toward innovation and scholarship, where is the return on investment for patient care? This side of the house asks an “engineering pipeline”-type question.

Asch (2014) and Chen (2014) have separately conducted programs of research demonstrating that where you trained affects the quality of care you deliver once in practice. But the analysis is at the level of the training program. The connection between an individual’s in-training assessment and their performance in practice has been most famously shown by Tamblyn (see Tamblyn, 2002). And the association between knowledge entrance exams and graduation from training has been established in numerous studies of undergraduate and professional training programs. But these associations are based on knowledge exams. Critics counter that this association simply predicts the ability to complete knowledge tests: a test-taking competency. (See, for example, the controversy around college entrance exams and sociocultural disadvantage.)

The movement to competency-based education and associated assessments of performance of cognitive, technical and affective skills was intended to advance this debate.  So, will competency-based education, and the associated competency-based assessments, lead to better patient outcomes? 

Purpose

To examine the association between physicians’ milestone ratings and certification examination scores and hospital outcomes for their patients

(Gray et al., 2024)

Methods

This is a retrospective cross-sectional database analysis of United States hospitalist admissions:

  • 3rd-year internal medicine residents (2016–18) who graduated and cared for (within 3 days of admission)
  • Medicare patients > 65 years old (nonelective, nonhospice) who were hospitalized (2017–19)
  • for 25 common diagnoses in
  • moderate or larger (> 100-bed) hospitals.

A multivariable regression analysis was performed (a minimal illustrative sketch follows this list) using:

  • Primary clinical outcomes of mortality or readmission at one week.
  • Mean milestone ratings of 22 sub-competencies across 6 core competency categories (patient care, medical knowledge, practice-based learning and improvement, systems-based practice, interpersonal and communication skills, and professionalism) on a 9-point scale, where < 7 = low and ≥ 8 = high (ratings in between = medium), with cut points chosen because of range restriction in the data.
  • Knowledge test scores on the national certification exam (first attempt), analyzed by quartile of performance.
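
To make the analytic approach concrete, here is a minimal sketch of this kind of adjusted model on simulated data. The variable names, covariates, and effect sizes are hypothetical, and the actual study used far more extensive patient, physician, and hospital adjustment; this is only meant to illustrate the categorization and regression described above.

```python
# Illustrative sketch only: simulated data and hypothetical variable names,
# not the analysis code used by Gray et al. (2024).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000  # simulated admissions

df = pd.DataFrame({
    "milestone_mean": rng.uniform(5.5, 9.0, n),   # mean milestone rating on the 9-point scale
    "exam_score": rng.normal(500, 100, n),        # first-attempt certification exam score (arbitrary units)
    "age": rng.integers(65, 95, n),               # stand-in patient-level covariates
    "comorbidity_count": rng.poisson(2, n),
})

# Categorize milestone ratings as described above: < 7 = low, >= 8 = high,
# ratings in between = medium (cut points reflect range restriction in the data).
df["milestone_cat"] = pd.cut(
    df["milestone_mean"], bins=[0, 7, 8, 10], right=False,
    labels=["low", "medium", "high"],
).astype(str)

# Exam scores analyzed by quartile of first-attempt performance.
df["exam_quartile"] = pd.qcut(df["exam_score"], 4, labels=["Q1", "Q2", "Q3", "Q4"]).astype(str)

# Simulate a binary 7-day mortality outcome with a weak exam-score effect.
logit = -3.3 + 0.02 * (df["age"] - 75) - 0.1 * (df["exam_score"] - 500) / 100
df["mort_7d"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Adjusted logistic regression of 7-day mortality on milestone category and
# exam quartile, with patient-level covariates, using the low category and
# the first quartile as reference groups.
model = smf.logit(
    "mort_7d ~ C(milestone_cat, Treatment('low'))"
    " + C(exam_quartile, Treatment('Q1')) + age + comorbidity_count",
    data=df,
).fit(disp=False)
print(model.summary())
```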

Results/Findings

Nearly 7k practicing hospitalists caring for more than 455k patients in nearly 2k hospitals were included.

25% of admissions were by a physician in the low competency rating category (8% high); 90% of admissions were by a physician who passed the knowledge exam on the first attempt.

There were no significant associations between top vs. bottom competency ratings (overall or for the medical knowledge competencies) and any hospital outcome measure.

**Figure 2. Overall Core Competency Associations: Adjusted Percentage Difference Compared With the Adjusted Low Ratings Category Outcome**

Adjusted patient outcomes for physicians rated low, medium, and high in overall core competencies, with the percentage difference of the medium and high categories relative to the low category (95% CIs in parentheses):

| Outcome | Low | Medium | Medium vs low, % difference | High | High vs low, % difference |
| --- | --- | --- | --- | --- | --- |
| 7-day mortality | 3.4% (3.3 to 3.6) | 3.5% (3.4 to 3.6) | 2.3% (-2.3 to 6.8) | 3.5% (3.3 to 3.8) | 2.7% (-5.2 to 10.6) |
| 7-day readmissions | 5.6% (5.5 to 5.8) | 5.6% (5.5 to 5.6) | -1.7% (-4.8 to 1.4) | 5.6% (5.3 to 5.8) | -1.7% (-6.9 to 3.6) |
| 30-day mortality | 8.7% (8.5 to 8.9) | 8.8% (8.7 to 8.9) | 0.8% (-1.8 to 3.4) | 8.7% (8.4 to 9.1) | 0.0% (-4.7 to 4.7) |
| 30-day readmissions | 16.5% (16.3 to 16.8) | 16.6% (16.5 to 16.7) | 0.4% (-1.3 to 2.1) | 16.5% (16.1 to 17.0) | -0.1% (-3.1 to 2.9) |
| Consultations | 1.01 (1.00 to 1.02) | 1.01 (1.01 to 1.02) | 0.1% (-1.2 to 1.3) | 1.00 (0.99 to 1.02) | -0.7% (-3.0 to 1.5) |
| Length of stay | 3.60 (3.57 to 3.63) | 3.59 (3.58 to 3.61) | -0.2% (-1.1 to 0.7) | 3.60 (3.55 to 3.65) | 0.1% (-1.5 to 1.7) |

The accompanying graph plots the percentage differences for each outcome (medium vs low and high vs low ratings); most differences are small and statistically non-significant.

Table from Gray et al., 2024

The top knowledge examination score quartile had an 8% reduction in 7-day mortality and a 9% reduction in 7-day readmissions compared with the bottom quartile.
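
As a rough check against the adjusted rates in the figure below: (3.4 - 3.7) / 3.7 ≈ -8% for 7-day mortality and (5.3 - 5.8) / 5.8 ≈ -9% for 7-day readmissions. The figure’s reported differences (-8.0% and -9.3%) are presumably computed from unrounded adjusted rates, so arithmetic on the rounded values is only approximate.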

Sensitivity analyses varying the clinical and statistical assumptions did not change these findings.

**Figure. Overall Certification Examination Score Associations: Adjusted Percentage Difference Compared With the Adjusted Quartile 1 Outcome**

Adjusted patient outcomes for physicians by quartile of first-attempt certification examination score (quartile 1 = lowest scores, quartile 4 = highest), with the percentage difference of quartiles 2 to 4 relative to quartile 1 (95% CIs in parentheses):

| Outcome | Quartile 1 | Quartile 2 | Q2 vs Q1, % difference | Quartile 3 | Q3 vs Q1, % difference | Quartile 4 | Q4 vs Q1, % difference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 7-day mortality | 3.7% (3.5 to 3.8) | 3.5% (3.4 to 3.6) | -4.7% (-9.7 to 0.2) | 3.5% (3.4 to 3.6) | -4.9% (-10.0 to 0.3) | 3.4% (3.2 to 3.5) | -8.0% (-13.0 to -3.1) |
| 7-day readmissions | 5.8% (5.7 to 6.0) | 5.6% (5.4 to 5.7) | -4.5% (-8.0 to -0.9) | 5.6% (5.4 to 5.7) | -4.8% (-8.3 to -1.4) | 5.3% (5.1 to 5.4) | -9.3% (-13.0 to -5.7) |
| 30-day mortality | 8.9% (8.8 to 9.1) | 8.7% (8.6 to 8.9) | -2.3% (-5.4 to 0.7) | 8.8% (8.7 to 9.0) | -1.3% (-4.3 to 1.8) | 8.6% (8.4 to 8.8) | -3.5% (-6.7 to -0.4) |
| 30-day readmissions | 16.6% (16.4 to 16.9) | 16.6% (16.4 to 16.9) | 0.1% (-2.0 to 2.2) | 16.6% (16.3 to 16.8) | -0.4% (-2.4 to 1.6) | 16.4% (16.2 to 16.7) | -1.1% (-3.2 to 1.0) |
| Consultations | 1.00 (0.99 to 1.01) | 1.01 (0.99 to 1.02) | 0.2% (-1.3 to 1.7) | 1.02 (1.01 to 1.03) | 1.4% (-0.1 to 2.9) | 1.03 (1.02 to 1.04) | 2.4% (0.8 to 3.9) |
| Length of stay | 3.60 (3.57 to 3.63) | 3.60 (3.57 to 3.63) | 0.3% (-0.8 to 1.4) | 3.61 (3.58 to 3.63) | 0.5% (-0.6 to 1.6) | 3.59 (3.56 to 3.62) | 0.1% (-1.0 to 1.2) |

The accompanying graph plots the percentage differences for each outcome (quartiles 2, 3, and 4 vs quartile 1). Quartile 4 generally shows better outcomes than quartile 1, especially for mortality and readmissions, with statistically significant differences in several cases.

Table from Gray et al., 2024, showing the associations between physician certification examination scores and patient outcomes

References

Asch, D. A., Nicholson, S., Srinivas, S. K., Herrin, J., & Epstein, A. J. (2014). How Do You Deliver a Good Obstetrician? Outcome-Based Evaluation of Medical Education. Academic Medicine, 89(1), 24–26.

Chen, C., Petterson, S., Phillips, R., Bazemore, A., & Mullan, F. (2014). Spending Patterns in Region of Residency Training and Subsequent Expenditures for Care Provided by Practicing Physicians for Medicare Beneficiaries. JAMA, 312(22), 2385.

Tamblyn, R. (2002). Association Between Licensure Examination Scores and Practice in Primary Care. JAMA, 288(23), 3019.


