Table 2 Mitigation strategies reported

From: Evaluating and addressing demographic disparities in medical large language models: a systematic review

| Author et al. | Year | Country | Model Evaluated | Type of Bias Studied | Mitigation Method | Mitigation Results |
| --- | --- | --- | --- | --- | --- | --- |
| Bakkum et al. | 2024 | Netherlands | GPT-3.5 | Gender bias | Prompt engineering: iterative prompt optimization, segmented prompting | Enhanced diversity in medical vignettes; improved inclusivity. |
| Yeh et al. | 2023 | Taiwan | GPT-3.5-turbo | Multiple societal biases | Prompt engineering: contextualization and disambiguation techniques | Reduced bias through detailed prompts and disambiguation. |
| Palacios Barea et al. | 2023 | Netherlands | GPT-3 | Gender, racial bias | Prompt engineering: thematic prompts | Identified and reduced biases in gender and racial representation. |
| Andreadis et al. | 2024 | USA | GPT-4 | Age, gender, racial bias | Prompt engineering: demographic tailoring | Found potential age bias in urgent care recommendations. |
| Bhardwaj et al. | 2021 | Singapore | BERT | Gender bias | Debiasing algorithm: gender debiasing using PCA | Significantly reduced gender bias in emotion prediction tasks. |
| Bozdag et al. | 2024 | Turkey | LegalBERT-Small | Gender bias | Debiasing algorithm: Legal-Context-Debias (LCD) | Reduced gender bias in legal text while maintaining performance. |
| Doughman et al. | 2023 | UAE | DistilBERT | Sexism, multiple biases | Debiasing algorithm: Context-Debias | Reduced biased predictions in masked language models. |

*Abbreviations: PCA, Principal Component Analysis; LCD, Legal-Context-Debias.*
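
The prompt-engineering mitigations in the table (segmented prompting, contextualization and disambiguation, demographic tailoring) share a common pattern: demographic attributes are stated explicitly, and the generation request is split into constrained steps so the model does not fill in missing details from stereotyped defaults. The sketch below illustrates that general pattern only; the function name, prompt wording, and patient details are hypothetical and are not taken from the cited studies.

```python
def build_vignette_prompt(condition: str, age: int, gender: str,
                          ethnicity: str) -> list[str]:
    """Return a sequence of prompts (segmented prompting).

    Each segment constrains one aspect of the vignette instead of
    asking for everything at once; the shared context states the
    demographics explicitly (disambiguation) so the model is not left
    to infer them.
    """
    context = (f"Patient: {age}-year-old {ethnicity} {gender}. "
               f"Presenting condition: {condition}. "
               "Do not infer any demographic detail not stated above.")
    return [
        context + " Step 1: Describe the presenting symptoms only.",
        context + " Step 2: List the relevant history, avoiding "
                  "assumptions tied to gender or ethnicity.",
        context + " Step 3: Give a differential diagnosis based strictly "
                  "on the stated findings.",
    ]

# Each segment would be sent to the model in turn, with earlier answers
# appended as context. The explicit demographics plus the disambiguation
# instruction are the mitigation; they reduce, not guarantee against,
# biased completions.
for prompt in build_vignette_prompt("chest pain", 54, "woman", "South Asian"):
    print(prompt, end="\n\n")
```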
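The PCA-based gender debiasing reported for Bhardwaj et al. is, in its common formulation (hard debiasing of embeddings), a projection step: PCA over difference vectors of definitional gendered word pairs estimates a gender direction, which is then removed from each embedding. The following is a minimal sketch under that assumption; the embeddings, word pairs, and function names are illustrative and are not the authors' code.

```python
import numpy as np

def gender_direction(embeddings: dict[str, np.ndarray],
                     pairs: list[tuple[str, str]]) -> np.ndarray:
    """Estimate the leading direction of the gender subspace via PCA.

    Each definitional pair (e.g. ("he", "she")) contributes its
    centered difference vectors; the first principal component of
    these differences approximates the gender direction.
    """
    diffs = []
    for a, b in pairs:
        center = (embeddings[a] + embeddings[b]) / 2
        diffs.append(embeddings[a] - center)
        diffs.append(embeddings[b] - center)
    diffs = np.stack(diffs)
    # First right-singular vector of the centered matrix = first PC.
    _, _, vt = np.linalg.svd(diffs - diffs.mean(axis=0),
                             full_matrices=False)
    return vt[0]

def debias(vec: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project a vector onto the complement of the gender direction."""
    direction = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, direction) * direction

# Tiny usage example with random 5-d embeddings (illustrative only).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=5) for w in ["he", "she", "man", "woman", "nurse"]}
g = gender_direction(emb, [("he", "she"), ("man", "woman")])
debiased = {w: debias(v, g) for w, v in emb.items()}
print(np.dot(debiased["nurse"], g))  # ~0 after projection
```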